00:00:00.000  Started by upstream project "autotest-per-patch" build number 132368
00:00:00.000  originally caused by:
00:00:00.000   Started by user sys_sgci
00:00:00.104  Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/vfio-user-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.105  The recommended git tool is: git
00:00:00.105  using credential 00000000-0000-0000-0000-000000000002
00:00:00.107   > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/vfio-user-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.165  Fetching changes from the remote Git repository
00:00:00.167   > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.223  Using shallow fetch with depth 1
00:00:00.223  Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.223   > git --version # timeout=10
00:00:00.276   > git --version # 'git version 2.39.2'
00:00:00.276  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.303  Setting http proxy: proxy-dmz.intel.com:911
00:00:00.303   > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.473   > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.485   > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.497  Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.497   > git config core.sparsecheckout # timeout=10
00:00:07.507   > git read-tree -mu HEAD # timeout=10
00:00:07.522   > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.552  Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.553   > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.675  [Pipeline] Start of Pipeline
00:00:07.692  [Pipeline] library
00:00:07.694  Loading library shm_lib@master
00:00:07.694  Library shm_lib@master is cached. Copying from home.
00:00:07.710  [Pipeline] node
00:00:07.725  Running on GP13 in /var/jenkins/workspace/vfio-user-phy-autotest
00:00:07.726  [Pipeline] {
00:00:07.734  [Pipeline] catchError
00:00:07.735  [Pipeline] {
00:00:07.745  [Pipeline] wrap
00:00:07.754  [Pipeline] {
00:00:07.759  [Pipeline] stage
00:00:07.760  [Pipeline] { (Prologue)
00:00:07.975  [Pipeline] sh
00:00:08.255  + logger -p user.info -t JENKINS-CI
00:00:08.275  [Pipeline] echo
00:00:08.276  Node: GP13
00:00:08.282  [Pipeline] sh
00:00:08.581  [Pipeline] setCustomBuildProperty
00:00:08.594  [Pipeline] echo
00:00:08.596  Cleanup processes
00:00:08.601  [Pipeline] sh
00:00:08.885  + sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:00:08.885  1652060 sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:00:08.898  [Pipeline] sh
00:00:09.186  ++ sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:00:09.186  ++ grep -v 'sudo pgrep'
00:00:09.186  ++ awk '{print $1}'
00:00:09.186  + sudo kill -9
00:00:09.186  + true
00:00:09.201  [Pipeline] cleanWs
00:00:09.211  [WS-CLEANUP] Deleting project workspace...
00:00:09.211  [WS-CLEANUP] Deferred wipeout is used...
00:00:09.218  [WS-CLEANUP] done
00:00:09.223  [Pipeline] setCustomBuildProperty
00:00:09.240  [Pipeline] sh
00:00:09.525  + sudo git config --global --replace-all safe.directory '*'
00:00:09.616  [Pipeline] httpRequest
00:00:09.994  [Pipeline] echo
00:00:09.995  Sorcerer 10.211.164.20 is alive
00:00:10.005  [Pipeline] retry
00:00:10.007  [Pipeline] {
00:00:10.019  [Pipeline] httpRequest
00:00:10.022  HttpMethod: GET
00:00:10.023  URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.023  Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.037  Response Code: HTTP/1.1 200 OK
00:00:10.037  Success: Status code 200 is in the accepted range: 200,404
00:00:10.038  Saving response body to /var/jenkins/workspace/vfio-user-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:17.051  [Pipeline] }
00:00:17.066  [Pipeline] // retry
00:00:17.073  [Pipeline] sh
00:00:17.357  + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:17.372  [Pipeline] httpRequest
00:00:17.772  [Pipeline] echo
00:00:17.774  Sorcerer 10.211.164.20 is alive
00:00:17.783  [Pipeline] retry
00:00:17.785  [Pipeline] {
00:00:17.799  [Pipeline] httpRequest
00:00:17.803  HttpMethod: GET
00:00:17.803  URL: http://10.211.164.20/packages/spdk_a5dab6cf7998a288aafc8366202b334b4ac5d08c.tar.gz
00:00:17.804  Sending request to url: http://10.211.164.20/packages/spdk_a5dab6cf7998a288aafc8366202b334b4ac5d08c.tar.gz
00:00:17.819  Response Code: HTTP/1.1 200 OK
00:00:17.819  Success: Status code 200 is in the accepted range: 200,404
00:00:17.820  Saving response body to /var/jenkins/workspace/vfio-user-phy-autotest/spdk_a5dab6cf7998a288aafc8366202b334b4ac5d08c.tar.gz
00:01:24.850  [Pipeline] }
00:01:24.868  [Pipeline] // retry
00:01:24.876  [Pipeline] sh
00:01:25.167  + tar --no-same-owner -xf spdk_a5dab6cf7998a288aafc8366202b334b4ac5d08c.tar.gz
00:01:28.468  [Pipeline] sh
00:01:28.759  + git -C spdk log --oneline -n5
00:01:28.759  a5dab6cf7 test/nvme/xnvme: Make sure nvme selected for tests is not used
00:01:28.759  876509865 test/nvme/xnvme: Test all conserve_cpu variants
00:01:28.759  a25b16198 test/nvme/xnvme: Enable polling in nvme driver
00:01:28.759  bb53e3ad9 test/nvme/xnvme: Drop null_blk
00:01:28.759  ace52fb4b test/nvme/xnvme: Tidy the test suite
00:01:28.772  [Pipeline] }
00:01:28.786  [Pipeline] // stage
00:01:28.798  [Pipeline] stage
00:01:28.801  [Pipeline] { (Prepare)
00:01:28.825  [Pipeline] writeFile
00:01:28.843  [Pipeline] sh
00:01:29.135  + logger -p user.info -t JENKINS-CI
00:01:29.150  [Pipeline] sh
00:01:29.437  + logger -p user.info -t JENKINS-CI
00:01:29.452  [Pipeline] sh
00:01:29.744  + cat autorun-spdk.conf
00:01:29.744  SPDK_RUN_FUNCTIONAL_TEST=1
00:01:29.744  SPDK_TEST_VFIOUSER_QEMU=1
00:01:29.744  SPDK_RUN_ASAN=1
00:01:29.744  SPDK_RUN_UBSAN=1
00:01:29.744  SPDK_TEST_SMA=1
00:01:29.752  RUN_NIGHTLY=0
00:01:29.757  [Pipeline] readFile
00:01:29.781  [Pipeline] copyArtifacts
00:01:32.750  Copied 1 artifact from "qemu-vfio" build number 34
00:01:32.756  [Pipeline] sh
00:01:33.059  + tar xf qemu-vfio.tar.gz
00:01:35.625  [Pipeline] copyArtifacts
00:01:35.650  Copied 1 artifact from "vagrant-build-vhost" build number 6
00:01:35.655  [Pipeline] sh
00:01:35.942  + sudo mkdir -p /var/spdk/dependencies/vhost
00:01:35.956  [Pipeline] sh
00:01:36.244  + cd /var/spdk/dependencies/vhost
00:01:36.244  + md5sum --quiet -c /var/jenkins/workspace/vfio-user-phy-autotest/spdk_test_image.qcow2.gz.md5
00:01:40.461  [Pipeline] withEnv
00:01:40.463  [Pipeline] {
00:01:40.476  [Pipeline] sh
00:01:40.761  + set -ex
00:01:40.761  + [[ -f /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf ]]
00:01:40.761  + source /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf
00:01:40.761  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:40.761  ++ SPDK_TEST_VFIOUSER_QEMU=1
00:01:40.761  ++ SPDK_RUN_ASAN=1
00:01:40.761  ++ SPDK_RUN_UBSAN=1
00:01:40.761  ++ SPDK_TEST_SMA=1
00:01:40.761  ++ RUN_NIGHTLY=0
00:01:40.761  + case $SPDK_TEST_NVMF_NICS in
00:01:40.761  + DRIVERS=
00:01:40.761  + [[ -n '' ]]
00:01:40.761  + exit 0
00:01:40.771  [Pipeline] }
00:01:40.786  [Pipeline] // withEnv
00:01:40.791  [Pipeline] }
00:01:40.804  [Pipeline] // stage
00:01:40.813  [Pipeline] catchError
00:01:40.815  [Pipeline] {
00:01:40.829  [Pipeline] timeout
00:01:40.829  Timeout set to expire in 35 min
00:01:40.831  [Pipeline] {
00:01:40.845  [Pipeline] stage
00:01:40.847  [Pipeline] { (Tests)
00:01:40.861  [Pipeline] sh
00:01:41.161  + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/vfio-user-phy-autotest
00:01:41.161  ++ readlink -f /var/jenkins/workspace/vfio-user-phy-autotest
00:01:41.161  + DIR_ROOT=/var/jenkins/workspace/vfio-user-phy-autotest
00:01:41.161  + [[ -n /var/jenkins/workspace/vfio-user-phy-autotest ]]
00:01:41.161  + DIR_SPDK=/var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:01:41.161  + DIR_OUTPUT=/var/jenkins/workspace/vfio-user-phy-autotest/output
00:01:41.161  + [[ -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk ]]
00:01:41.161  + [[ ! -d /var/jenkins/workspace/vfio-user-phy-autotest/output ]]
00:01:41.161  + mkdir -p /var/jenkins/workspace/vfio-user-phy-autotest/output
00:01:41.161  + [[ -d /var/jenkins/workspace/vfio-user-phy-autotest/output ]]
00:01:41.161  + [[ vfio-user-phy-autotest == pkgdep-* ]]
00:01:41.161  + cd /var/jenkins/workspace/vfio-user-phy-autotest
00:01:41.161  + source /etc/os-release
00:01:41.161  ++ NAME='Fedora Linux'
00:01:41.161  ++ VERSION='39 (Cloud Edition)'
00:01:41.161  ++ ID=fedora
00:01:41.161  ++ VERSION_ID=39
00:01:41.161  ++ VERSION_CODENAME=
00:01:41.161  ++ PLATFORM_ID=platform:f39
00:01:41.161  ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:41.161  ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:41.161  ++ LOGO=fedora-logo-icon
00:01:41.161  ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:41.161  ++ HOME_URL=https://fedoraproject.org/
00:01:41.161  ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:41.161  ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:41.161  ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:41.161  ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:41.161  ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:41.161  ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:41.161  ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:41.161  ++ SUPPORT_END=2024-11-12
00:01:41.161  ++ VARIANT='Cloud Edition'
00:01:41.161  ++ VARIANT_ID=cloud
00:01:41.161  + uname -a
00:01:41.161  Linux spdk-gp-13 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:41.161  + sudo /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh status
00:01:42.541  Hugepages
00:01:42.541  node     hugesize     free /  total
00:01:42.541  node0   1048576kB        0 /      0
00:01:42.541  node0      2048kB        0 /      0
00:01:42.541  node1   1048576kB        0 /      0
00:01:42.541  node1      2048kB        0 /      0
00:01:42.541  
00:01:42.541  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:01:42.541  I/OAT                     0000:00:04.0    8086   0e20   0       ioatdma          -          -
00:01:42.541  I/OAT                     0000:00:04.1    8086   0e21   0       ioatdma          -          -
00:01:42.541  I/OAT                     0000:00:04.2    8086   0e22   0       ioatdma          -          -
00:01:42.541  I/OAT                     0000:00:04.3    8086   0e23   0       ioatdma          -          -
00:01:42.541  I/OAT                     0000:00:04.4    8086   0e24   0       ioatdma          -          -
00:01:42.541  I/OAT                     0000:00:04.5    8086   0e25   0       ioatdma          -          -
00:01:42.541  I/OAT                     0000:00:04.6    8086   0e26   0       ioatdma          -          -
00:01:42.541  I/OAT                     0000:00:04.7    8086   0e27   0       ioatdma          -          -
00:01:42.541  I/OAT                     0000:80:04.0    8086   0e20   1       ioatdma          -          -
00:01:42.541  I/OAT                     0000:80:04.1    8086   0e21   1       ioatdma          -          -
00:01:42.541  I/OAT                     0000:80:04.2    8086   0e22   1       ioatdma          -          -
00:01:42.541  I/OAT                     0000:80:04.3    8086   0e23   1       ioatdma          -          -
00:01:42.541  I/OAT                     0000:80:04.4    8086   0e24   1       ioatdma          -          -
00:01:42.541  I/OAT                     0000:80:04.5    8086   0e25   1       ioatdma          -          -
00:01:42.541  I/OAT                     0000:80:04.6    8086   0e26   1       ioatdma          -          -
00:01:42.541  I/OAT                     0000:80:04.7    8086   0e27   1       ioatdma          -          -
00:01:42.541  NVMe                      0000:85:00.0    8086   0a54   1       nvme             nvme0      nvme0n1
00:01:42.541  + rm -f /tmp/spdk-ld-path
00:01:42.541  + source autorun-spdk.conf
00:01:42.541  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:42.541  ++ SPDK_TEST_VFIOUSER_QEMU=1
00:01:42.541  ++ SPDK_RUN_ASAN=1
00:01:42.541  ++ SPDK_RUN_UBSAN=1
00:01:42.541  ++ SPDK_TEST_SMA=1
00:01:42.541  ++ RUN_NIGHTLY=0
00:01:42.541  + ((  SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1  ))
00:01:42.541  + [[ -n '' ]]
00:01:42.541  + sudo git config --global --add safe.directory /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:01:42.541  + for M in /var/spdk/build-*-manifest.txt
00:01:42.541  + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:42.541  + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/vfio-user-phy-autotest/output/
00:01:42.541  + for M in /var/spdk/build-*-manifest.txt
00:01:42.541  + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:42.541  + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/vfio-user-phy-autotest/output/
00:01:42.541  + for M in /var/spdk/build-*-manifest.txt
00:01:42.541  + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:42.541  + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/vfio-user-phy-autotest/output/
00:01:42.541  ++ uname
00:01:42.541  + [[ Linux == \L\i\n\u\x ]]
00:01:42.541  + sudo dmesg -T
00:01:42.541  + sudo dmesg --clear
00:01:42.541  + dmesg_pid=1653281
00:01:42.541  + [[ Fedora Linux == FreeBSD ]]
00:01:42.541  + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:42.541  + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:42.541  + sudo dmesg -Tw
00:01:42.541  + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:42.541  + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:42.541  + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:42.541  + [[ -x /usr/src/fio-static/fio ]]
00:01:42.541  + export FIO_BIN=/usr/src/fio-static/fio
00:01:42.541  + FIO_BIN=/usr/src/fio-static/fio
00:01:42.541  + [[ /var/jenkins/workspace/vfio-user-phy-autotest/qemu_vfio/bin/qemu-system-x86_64 == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\v\f\i\o\-\u\s\e\r\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:42.541  ++ ldd /var/jenkins/workspace/vfio-user-phy-autotest/qemu_vfio/bin/qemu-system-x86_64
00:01:42.541  + deps='	linux-vdso.so.1 (0x00007ffefa186000)
00:01:42.541  	libpixman-1.so.0 => /usr/lib64/libpixman-1.so.0 (0x00007f38301fc000)
00:01:42.541  	libz.so.1 => /usr/lib64/libz.so.1 (0x00007f38301e2000)
00:01:42.541  	libudev.so.1 => /usr/lib64/libudev.so.1 (0x00007f38301ab000)
00:01:42.541  	libpmem.so.1 => /usr/lib64/libpmem.so.1 (0x00007f3830152000)
00:01:42.541  	libdaxctl.so.1 => /usr/lib64/libdaxctl.so.1 (0x00007f3830145000)
00:01:42.541  	libnuma.so.1 => /usr/lib64/libnuma.so.1 (0x00007f3830136000)
00:01:42.541  	libgio-2.0.so.0 => /usr/lib64/libgio-2.0.so.0 (0x00007f382ff5c000)
00:01:42.541  	libgobject-2.0.so.0 => /usr/lib64/libgobject-2.0.so.0 (0x00007f382fefc000)
00:01:42.541  	libglib-2.0.so.0 => /usr/lib64/libglib-2.0.so.0 (0x00007f382fdb2000)
00:01:42.541  	librdmacm.so.1 => /usr/lib64/librdmacm.so.1 (0x00007f382fd96000)
00:01:42.541  	libibverbs.so.1 => /usr/lib64/libibverbs.so.1 (0x00007f382fd74000)
00:01:42.541  	libslirp.so.0 => /usr/lib64/libslirp.so.0 (0x00007f382fd52000)
00:01:42.541  	libbpf.so.0 => not found
00:01:42.541  	libncursesw.so.6 => /usr/lib64/libncursesw.so.6 (0x00007f382fd11000)
00:01:42.541  	libtinfo.so.6 => /usr/lib64/libtinfo.so.6 (0x00007f382fcdc000)
00:01:42.541  	libgmodule-2.0.so.0 => /usr/lib64/libgmodule-2.0.so.0 (0x00007f382fcd5000)
00:01:42.541  	liburing.so.2 => /usr/lib64/liburing.so.2 (0x00007f382fccd000)
00:01:42.541  	libfuse3.so.3 => /usr/lib64/libfuse3.so.3 (0x00007f382fc8b000)
00:01:42.541  	libiscsi.so.9 => /usr/lib64/iscsi/libiscsi.so.9 (0x00007f382fc5b000)
00:01:42.541  	libaio.so.1 => /usr/lib64/libaio.so.1 (0x00007f382fc56000)
00:01:42.541  	librbd.so.1 => /usr/lib64/librbd.so.1 (0x00007f382f39b000)
00:01:42.541  	librados.so.2 => /usr/lib64/librados.so.2 (0x00007f382f1d3000)
00:01:42.541  	libm.so.6 => /usr/lib64/libm.so.6 (0x00007f382f0f2000)
00:01:42.541  	libgcc_s.so.1 => /usr/lib64/libgcc_s.so.1 (0x00007f382f0cd000)
00:01:42.541  	libc.so.6 => /usr/lib64/libc.so.6 (0x00007f382eee9000)
00:01:42.541  	/lib64/ld-linux-x86-64.so.2 (0x00007f3831360000)
00:01:42.541  	libcap.so.2 => /usr/lib64/libcap.so.2 (0x00007f382eedf000)
00:01:42.541  	libndctl.so.6 => /usr/lib64/libndctl.so.6 (0x00007f382eeb2000)
00:01:42.541  	libuuid.so.1 => /usr/lib64/libuuid.so.1 (0x00007f382eea8000)
00:01:42.541  	libkmod.so.2 => /usr/lib64/libkmod.so.2 (0x00007f382ee8c000)
00:01:42.541  	libmount.so.1 => /usr/lib64/libmount.so.1 (0x00007f382ee39000)
00:01:42.541  	libselinux.so.1 => /usr/lib64/libselinux.so.1 (0x00007f382ee0c000)
00:01:42.541  	libffi.so.8 => /usr/lib64/libffi.so.8 (0x00007f382edfc000)
00:01:42.541  	libpcre2-8.so.0 => /usr/lib64/libpcre2-8.so.0 (0x00007f382ed61000)
00:01:42.541  	libnl-3.so.200 => /usr/lib64/libnl-3.so.200 (0x00007f382ed3c000)
00:01:42.541  	libnl-route-3.so.200 => /usr/lib64/libnl-route-3.so.200 (0x00007f382eca4000)
00:01:42.541  	libgcrypt.so.20 => /usr/lib64/libgcrypt.so.20 (0x00007f382eb6a000)
00:01:42.541  	libssl.so.3 => /usr/lib64/libssl.so.3 (0x00007f382eac7000)
00:01:42.541  	libcryptsetup.so.12 => /usr/lib64/libcryptsetup.so.12 (0x00007f382ea46000)
00:01:42.541  	libceph-common.so.2 => /usr/lib64/ceph/libceph-common.so.2 (0x00007f382de16000)
00:01:42.541  	libcrypto.so.3 => /usr/lib64/libcrypto.so.3 (0x00007f382d93d000)
00:01:42.541  	libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00007f382d6e7000)
00:01:42.541  	libzstd.so.1 => /usr/lib64/libzstd.so.1 (0x00007f382d628000)
00:01:42.541  	liblzma.so.5 => /usr/lib64/liblzma.so.5 (0x00007f382d5f5000)
00:01:42.541  	libblkid.so.1 => /usr/lib64/libblkid.so.1 (0x00007f382d5b9000)
00:01:42.541  	libgpg-error.so.0 => /usr/lib64/libgpg-error.so.0 (0x00007f382d593000)
00:01:42.541  	libdevmapper.so.1.02 => /usr/lib64/libdevmapper.so.1.02 (0x00007f382d534000)
00:01:42.541  	libargon2.so.1 => /usr/lib64/libargon2.so.1 (0x00007f382d52c000)
00:01:42.541  	libjson-c.so.5 => /usr/lib64/libjson-c.so.5 (0x00007f382d518000)
00:01:42.541  	libresolv.so.2 => /usr/lib64/libresolv.so.2 (0x00007f382d507000)
00:01:42.541  	libcurl.so.4 => /usr/lib64/libcurl.so.4 (0x00007f382d453000)
00:01:42.541  	libthrift-0.15.0.so => /usr/lib64/libthrift-0.15.0.so (0x00007f382d3b9000)
00:01:42.541  	libnghttp2.so.14 => /usr/lib64/libnghttp2.so.14 (0x00007f382d38c000)
00:01:42.541  	libidn2.so.0 => /usr/lib64/libidn2.so.0 (0x00007f382d36a000)
00:01:42.541  	libssh.so.4 => /usr/lib64/libssh.so.4 (0x00007f382d2f7000)
00:01:42.541  	libpsl.so.5 => /usr/lib64/libpsl.so.5 (0x00007f382d2e3000)
00:01:42.541  	libgssapi_krb5.so.2 => /usr/lib64/libgssapi_krb5.so.2 (0x00007f382d28d000)
00:01:42.541  	libldap.so.2 => /usr/lib64/libldap.so.2 (0x00007f382d226000)
00:01:42.541  	liblber.so.2 => /usr/lib64/liblber.so.2 (0x00007f382d214000)
00:01:42.541  	libbrotlidec.so.1 => /usr/lib64/libbrotlidec.so.1 (0x00007f382d206000)
00:01:42.541  	libunistring.so.5 => /usr/lib64/libunistring.so.5 (0x00007f382d056000)
00:01:42.541  	libkrb5.so.3 => /usr/lib64/libkrb5.so.3 (0x00007f382cf7d000)
00:01:42.541  	libk5crypto.so.3 => /usr/lib64/libk5crypto.so.3 (0x00007f382cf63000)
00:01:42.541  	libcom_err.so.2 => /usr/lib64/libcom_err.so.2 (0x00007f382cf5c000)
00:01:42.541  	libkrb5support.so.0 => /usr/lib64/libkrb5support.so.0 (0x00007f382cf4c000)
00:01:42.541  	libkeyutils.so.1 => /usr/lib64/libkeyutils.so.1 (0x00007f382cf45000)
00:01:42.541  	libevent-2.1.so.7 => /usr/lib64/libevent-2.1.so.7 (0x00007f382ceed000)
00:01:42.541  	libsasl2.so.3 => /usr/lib64/libsasl2.so.3 (0x00007f382cece000)
00:01:42.541  	libbrotlicommon.so.1 => /usr/lib64/libbrotlicommon.so.1 (0x00007f382cea9000)
00:01:42.541  	libcrypt.so.2 => /usr/lib64/libcrypt.so.2 (0x00007f382ce70000)'
00:01:42.541  + [[ 	linux-vdso.so.1 (0x00007ffefa186000)
00:01:42.542  	libcrypt.so.2 => /usr/lib64/libcrypt.so.2 (0x00007f382ce70000) == *\n\o\t\ \f\o\u\n\d* ]]
00:01:42.542  + unset -v VFIO_QEMU_BIN
00:01:42.542  + [[ ! -v VFIO_QEMU_BIN ]]
00:01:42.542  + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:42.542  + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:42.542  + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:42.542  + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:42.542  + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:42.542  + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:42.542  + spdk/autorun.sh /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf
00:01:42.542    09:58:37  -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:42.542   09:58:37  -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf
00:01:42.542    09:58:37  -- vfio-user-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:42.542    09:58:37  -- vfio-user-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_VFIOUSER_QEMU=1
00:01:42.542    09:58:37  -- vfio-user-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_RUN_ASAN=1
00:01:42.542    09:58:37  -- vfio-user-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_RUN_UBSAN=1
00:01:42.542    09:58:37  -- vfio-user-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_SMA=1
00:01:42.542    09:58:37  -- vfio-user-phy-autotest/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:01:42.542   09:58:37  -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:42.542   09:58:37  -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf
00:01:42.542     09:58:37  -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:42.542    09:58:37  -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/common.sh
00:01:42.542     09:58:37  -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:42.542     09:58:37  -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:42.542     09:58:37  -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:42.542     09:58:37  -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:42.542      09:58:37  -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:42.542      09:58:37  -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:42.542      09:58:37  -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:42.542      09:58:37  -- paths/export.sh@5 -- $ export PATH
00:01:42.542      09:58:37  -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:42.542    09:58:37  -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output
00:01:42.542      09:58:37  -- common/autobuild_common.sh@493 -- $ date +%s
00:01:42.542     09:58:37  -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732093117.XXXXXX
00:01:42.542    09:58:37  -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732093117.QIZXiK
00:01:42.542    09:58:37  -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:42.542    09:58:37  -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:42.542    09:58:37  -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/'
00:01:42.542    09:58:37  -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/vfio-user-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:42.542    09:58:37  -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/vfio-user-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:42.542     09:58:37  -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:42.542     09:58:37  -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:42.542     09:58:37  -- common/autotest_common.sh@10 -- $ set +x
00:01:42.542    09:58:37  -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-sma --with-crypto'
00:01:42.542    09:58:37  -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:42.542    09:58:37  -- pm/common@17 -- $ local monitor
00:01:42.542    09:58:37  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:42.542    09:58:37  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:42.542    09:58:37  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:42.542     09:58:37  -- pm/common@21 -- $ date +%s
00:01:42.542    09:58:37  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:42.542     09:58:37  -- pm/common@21 -- $ date +%s
00:01:42.542    09:58:37  -- pm/common@25 -- $ sleep 1
00:01:42.542     09:58:37  -- pm/common@21 -- $ date +%s
00:01:42.542     09:58:37  -- pm/common@21 -- $ date +%s
00:01:42.542    09:58:37  -- pm/common@21 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732093117
00:01:42.543    09:58:37  -- pm/common@21 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732093117
00:01:42.543    09:58:37  -- pm/common@21 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732093117
00:01:42.543    09:58:37  -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732093117
00:01:42.543  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732093117_collect-cpu-load.pm.log
00:01:42.543  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732093117_collect-vmstat.pm.log
00:01:42.543  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732093117_collect-cpu-temp.pm.log
00:01:42.543  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732093117_collect-bmc-pm.bmc.pm.log
00:01:43.481    09:58:38  -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:43.481   09:58:38  -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:43.481   09:58:38  -- spdk/autobuild.sh@12 -- $ umask 022
00:01:43.481   09:58:38  -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:01:43.481   09:58:38  -- spdk/autobuild.sh@16 -- $ date -u
00:01:43.481  Wed Nov 20 08:58:38 AM UTC 2024
00:01:43.481   09:58:38  -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:43.481  v25.01-pre-212-ga5dab6cf7
00:01:43.481   09:58:38  -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:43.481   09:58:38  -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:43.481   09:58:38  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:43.481   09:58:38  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:43.481   09:58:38  -- common/autotest_common.sh@10 -- $ set +x
00:01:43.481  ************************************
00:01:43.481  START TEST asan
00:01:43.481  ************************************
00:01:43.481   09:58:38 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:43.481  using asan
00:01:43.481  
00:01:43.481  real	0m0.000s
00:01:43.481  user	0m0.000s
00:01:43.481  sys	0m0.000s
00:01:43.481   09:58:38 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:43.481   09:58:38 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:43.481  ************************************
00:01:43.481  END TEST asan
00:01:43.481  ************************************
00:01:43.740   09:58:38  -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:43.740   09:58:38  -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:43.740   09:58:38  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:43.740   09:58:38  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:43.740   09:58:38  -- common/autotest_common.sh@10 -- $ set +x
00:01:43.740  ************************************
00:01:43.740  START TEST ubsan
00:01:43.740  ************************************
00:01:43.740   09:58:38 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:43.740  using ubsan
00:01:43.740  
00:01:43.740  real	0m0.000s
00:01:43.740  user	0m0.000s
00:01:43.740  sys	0m0.000s
00:01:43.740   09:58:38 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:43.740   09:58:38 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:43.740  ************************************
00:01:43.740  END TEST ubsan
00:01:43.740  ************************************
00:01:43.740   09:58:38  -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:43.740   09:58:38  -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:43.740   09:58:38  -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:43.740   09:58:38  -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:43.740   09:58:38  -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:43.740   09:58:38  -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:43.740   09:58:38  -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:43.741   09:58:38  -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:43.741   09:58:38  -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-sma --with-crypto --with-shared
00:01:43.741  Using default SPDK env in /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk
00:01:43.741  Using default DPDK in /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build
00:01:44.000  Using 'verbs' RDMA provider
00:01:54.915  Configuring ISA-L (logfile: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:04.891  Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:05.458  Creating mk/config.mk...done.
00:02:05.458  Creating mk/cc.flags.mk...done.
00:02:05.458  Type 'make' to build.
00:02:05.458   09:59:00  -- spdk/autobuild.sh@70 -- $ run_test make make -j48
00:02:05.458   09:59:00  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:05.458   09:59:00  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:05.458   09:59:00  -- common/autotest_common.sh@10 -- $ set +x
00:02:05.458  ************************************
00:02:05.458  START TEST make
00:02:05.458  ************************************
00:02:05.458   09:59:00 make -- common/autotest_common.sh@1129 -- $ make -j48
00:02:06.027  make[1]: Nothing to be done for 'all'.
00:02:07.940  The Meson build system
00:02:07.940  Version: 1.5.0
00:02:07.940  Source dir: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/libvfio-user
00:02:07.940  Build dir: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:07.940  Build type: native build
00:02:07.940  Project name: libvfio-user
00:02:07.940  Project version: 0.0.1
00:02:07.940  C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:07.940  C linker for the host machine: cc ld.bfd 2.40-14
00:02:07.940  Host machine cpu family: x86_64
00:02:07.940  Host machine cpu: x86_64
00:02:07.940  Run-time dependency threads found: YES
00:02:07.940  Library dl found: YES
00:02:07.940  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:07.940  Run-time dependency json-c found: YES 0.17
00:02:07.940  Run-time dependency cmocka found: YES 1.1.7
00:02:07.940  Program pytest-3 found: NO
00:02:07.940  Program flake8 found: NO
00:02:07.940  Program misspell-fixer found: NO
00:02:07.940  Program restructuredtext-lint found: NO
00:02:07.940  Program valgrind found: YES (/usr/bin/valgrind)
00:02:07.940  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:02:07.940  Compiler for C supports arguments -Wmissing-declarations: YES 
00:02:07.940  Compiler for C supports arguments -Wwrite-strings: YES 
00:02:07.940  ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:07.940  Program test-lspci.sh found: YES (/var/jenkins/workspace/vfio-user-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:07.940  Program test-linkage.sh found: YES (/var/jenkins/workspace/vfio-user-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:07.940  ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:07.940  Build targets in project: 8
00:02:07.940  WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:07.940   * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:07.940  
00:02:07.940  libvfio-user 0.0.1
00:02:07.940  
00:02:07.940    User defined options
00:02:07.940      buildtype      : debug
00:02:07.940      default_library: shared
00:02:07.940      libdir         : /usr/local/lib
00:02:07.940  
00:02:07.940  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:08.512  ninja: Entering directory `/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:08.772  [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:08.772  [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:08.772  [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:08.772  [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:08.772  [5/37] Compiling C object samples/null.p/null.c.o
00:02:08.772  [6/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:08.772  [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:08.772  [8/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:08.772  [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:08.772  [10/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:08.772  [11/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:08.772  [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:08.772  [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:08.772  [14/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:08.772  [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:08.772  [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:09.033  [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:09.033  [18/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:09.033  [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:09.033  [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:09.033  [21/37] Compiling C object samples/client.p/client.c.o
00:02:09.033  [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:09.033  [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:09.033  [24/37] Compiling C object samples/server.p/server.c.o
00:02:09.033  [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:09.033  [26/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:09.033  [27/37] Linking target samples/client
00:02:09.033  [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:09.033  [29/37] Linking target lib/libvfio-user.so.0.0.1
00:02:09.033  [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:09.296  [31/37] Linking target test/unit_tests
00:02:09.296  [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:09.296  [33/37] Linking target samples/lspci
00:02:09.296  [34/37] Linking target samples/server
00:02:09.296  [35/37] Linking target samples/null
00:02:09.296  [36/37] Linking target samples/gpio-pci-idio-16
00:02:09.296  [37/37] Linking target samples/shadow_ioeventfd_server
00:02:09.296  INFO: autodetecting backend as ninja
00:02:09.296  INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:09.558  DESTDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:10.132  ninja: Entering directory `/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:10.132  ninja: no work to do.
00:02:48.913  The Meson build system
00:02:48.913  Version: 1.5.0
00:02:48.913  Source dir: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk
00:02:48.913  Build dir: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build-tmp
00:02:48.913  Build type: native build
00:02:48.913  Program cat found: YES (/usr/bin/cat)
00:02:48.913  Project name: DPDK
00:02:48.913  Project version: 24.03.0
00:02:48.913  C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:48.913  C linker for the host machine: cc ld.bfd 2.40-14
00:02:48.913  Host machine cpu family: x86_64
00:02:48.913  Host machine cpu: x86_64
00:02:48.913  Message: ## Building in Developer Mode ##
00:02:48.913  Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:48.913  Program check-symbols.sh found: YES (/var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:48.913  Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:48.913  Program python3 found: YES (/usr/bin/python3)
00:02:48.913  Program cat found: YES (/usr/bin/cat)
00:02:48.913  Compiler for C supports arguments -march=native: YES 
00:02:48.913  Checking for size of "void *" : 8 
00:02:48.913  Checking for size of "void *" : 8 (cached)
00:02:48.913  Compiler for C supports link arguments -Wl,--undefined-version: YES 
00:02:48.913  Library m found: YES
00:02:48.913  Library numa found: YES
00:02:48.913  Has header "numaif.h" : YES 
00:02:48.913  Library fdt found: NO
00:02:48.913  Library execinfo found: NO
00:02:48.913  Has header "execinfo.h" : YES 
00:02:48.913  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:48.913  Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:48.913  Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:48.913  Run-time dependency jansson found: NO (tried pkgconfig)
00:02:48.913  Run-time dependency openssl found: YES 3.1.1
00:02:48.913  Run-time dependency libpcap found: YES 1.10.4
00:02:48.913  Has header "pcap.h" with dependency libpcap: YES 
00:02:48.913  Compiler for C supports arguments -Wcast-qual: YES 
00:02:48.913  Compiler for C supports arguments -Wdeprecated: YES 
00:02:48.913  Compiler for C supports arguments -Wformat: YES 
00:02:48.913  Compiler for C supports arguments -Wformat-nonliteral: NO 
00:02:48.913  Compiler for C supports arguments -Wformat-security: NO 
00:02:48.913  Compiler for C supports arguments -Wmissing-declarations: YES 
00:02:48.913  Compiler for C supports arguments -Wmissing-prototypes: YES 
00:02:48.913  Compiler for C supports arguments -Wnested-externs: YES 
00:02:48.914  Compiler for C supports arguments -Wold-style-definition: YES 
00:02:48.914  Compiler for C supports arguments -Wpointer-arith: YES 
00:02:48.914  Compiler for C supports arguments -Wsign-compare: YES 
00:02:48.914  Compiler for C supports arguments -Wstrict-prototypes: YES 
00:02:48.914  Compiler for C supports arguments -Wundef: YES 
00:02:48.914  Compiler for C supports arguments -Wwrite-strings: YES 
00:02:48.914  Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:02:48.914  Compiler for C supports arguments -Wno-packed-not-aligned: YES 
00:02:48.914  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:02:48.914  Compiler for C supports arguments -Wno-zero-length-bounds: YES 
00:02:48.914  Program objdump found: YES (/usr/bin/objdump)
00:02:48.914  Compiler for C supports arguments -mavx512f: YES 
00:02:48.914  Checking if "AVX512 checking" compiles: YES 
00:02:48.914  Fetching value of define "__SSE4_2__" : 1 
00:02:48.914  Fetching value of define "__AES__" : 1 
00:02:48.914  Fetching value of define "__AVX__" : 1 
00:02:48.914  Fetching value of define "__AVX2__" : (undefined) 
00:02:48.914  Fetching value of define "__AVX512BW__" : (undefined) 
00:02:48.914  Fetching value of define "__AVX512CD__" : (undefined) 
00:02:48.914  Fetching value of define "__AVX512DQ__" : (undefined) 
00:02:48.914  Fetching value of define "__AVX512F__" : (undefined) 
00:02:48.914  Fetching value of define "__AVX512VL__" : (undefined) 
00:02:48.914  Fetching value of define "__PCLMUL__" : 1 
00:02:48.914  Fetching value of define "__RDRND__" : 1 
00:02:48.914  Fetching value of define "__RDSEED__" : (undefined) 
00:02:48.914  Fetching value of define "__VPCLMULQDQ__" : (undefined) 
00:02:48.914  Fetching value of define "__znver1__" : (undefined) 
00:02:48.914  Fetching value of define "__znver2__" : (undefined) 
00:02:48.914  Fetching value of define "__znver3__" : (undefined) 
00:02:48.914  Fetching value of define "__znver4__" : (undefined) 
00:02:48.914  Library asan found: YES
00:02:48.914  Compiler for C supports arguments -Wno-format-truncation: YES 
00:02:48.914  Message: lib/log: Defining dependency "log"
00:02:48.914  Message: lib/kvargs: Defining dependency "kvargs"
00:02:48.914  Message: lib/telemetry: Defining dependency "telemetry"
00:02:48.914  Library rt found: YES
00:02:48.914  Checking for function "getentropy" : NO 
00:02:48.914  Message: lib/eal: Defining dependency "eal"
00:02:48.914  Message: lib/ring: Defining dependency "ring"
00:02:48.914  Message: lib/rcu: Defining dependency "rcu"
00:02:48.914  Message: lib/mempool: Defining dependency "mempool"
00:02:48.914  Message: lib/mbuf: Defining dependency "mbuf"
00:02:48.914  Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:48.914  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:48.914  Compiler for C supports arguments -mpclmul: YES 
00:02:48.914  Compiler for C supports arguments -maes: YES 
00:02:48.914  Compiler for C supports arguments -mavx512f: YES (cached)
00:02:48.914  Compiler for C supports arguments -mavx512bw: YES 
00:02:48.914  Compiler for C supports arguments -mavx512dq: YES 
00:02:48.914  Compiler for C supports arguments -mavx512vl: YES 
00:02:48.914  Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:48.914  Compiler for C supports arguments -mavx2: YES 
00:02:48.914  Compiler for C supports arguments -mavx: YES 
00:02:48.914  Message: lib/net: Defining dependency "net"
00:02:48.914  Message: lib/meter: Defining dependency "meter"
00:02:48.914  Message: lib/ethdev: Defining dependency "ethdev"
00:02:48.914  Message: lib/pci: Defining dependency "pci"
00:02:48.914  Message: lib/cmdline: Defining dependency "cmdline"
00:02:48.914  Message: lib/hash: Defining dependency "hash"
00:02:48.914  Message: lib/timer: Defining dependency "timer"
00:02:48.914  Message: lib/compressdev: Defining dependency "compressdev"
00:02:48.914  Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:48.914  Message: lib/dmadev: Defining dependency "dmadev"
00:02:48.914  Compiler for C supports arguments -Wno-cast-qual: YES 
00:02:48.914  Message: lib/power: Defining dependency "power"
00:02:48.914  Message: lib/reorder: Defining dependency "reorder"
00:02:48.914  Message: lib/security: Defining dependency "security"
00:02:48.914  Has header "linux/userfaultfd.h" : YES 
00:02:48.914  Has header "linux/vduse.h" : YES 
00:02:48.914  Message: lib/vhost: Defining dependency "vhost"
00:02:48.914  Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:48.914  Message: drivers/bus/auxiliary: Defining dependency "bus_auxiliary"
00:02:48.914  Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:48.914  Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:48.914  Compiler for C supports arguments -std=c11: YES 
00:02:48.914  Compiler for C supports arguments -Wno-strict-prototypes: YES 
00:02:48.914  Compiler for C supports arguments -D_BSD_SOURCE: YES 
00:02:48.914  Compiler for C supports arguments -D_DEFAULT_SOURCE: YES 
00:02:48.914  Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES 
00:02:48.914  Run-time dependency libmlx5 found: YES 1.24.46.0
00:02:48.914  Run-time dependency libibverbs found: YES 1.14.46.0
00:02:48.914  Library mtcr_ul found: NO
00:02:48.914  Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_ESP" with dependencies libmlx5, libibverbs: YES 
00:02:48.914  Header "infiniband/verbs.h" has symbol "IBV_RX_HASH_IPSEC_SPI" with dependencies libmlx5, libibverbs: YES 
00:02:48.914  Header "infiniband/verbs.h" has symbol "IBV_ACCESS_RELAXED_ORDERING " with dependencies libmlx5, libibverbs: YES 
00:02:48.914  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX" with dependencies libmlx5, libibverbs: YES 
00:02:48.914  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS" with dependencies libmlx5, libibverbs: YES 
00:02:48.914  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED" with dependencies libmlx5, libibverbs: YES 
00:02:48.914  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP" with dependencies libmlx5, libibverbs: YES 
00:02:48.914  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD" with dependencies libmlx5, libibverbs: YES 
00:02:48.914  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_flow_action_packet_reformat" with dependencies libmlx5, libibverbs: YES 
00:02:48.914  Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_MPLS" with dependencies libmlx5, libibverbs: YES 
00:02:48.914  Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAGS_PCI_WRITE_END_PADDING" with dependencies libmlx5, libibverbs: YES 
00:02:48.914  Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAG_RX_END_PADDING" with dependencies libmlx5, libibverbs: NO 
00:02:48.914  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_devx_port" with dependencies libmlx5, libibverbs: NO 
00:02:48.914  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_port" with dependencies libmlx5, libibverbs: YES 
00:02:48.914  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_ib_port" with dependencies libmlx5, libibverbs: YES 
00:02:48.914  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_create" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_COUNTERS_DEVX" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_DEFAULT_MISS" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_query_async" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_qp_query" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_pp_alloc" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_devx_tir" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_get_event" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_meter" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "MLX5_MMAP_GET_NC_PAGES_CMD" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_NIC_RX" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_FDB" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_push_vlan" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_alloc_var" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ENHANCED_MPSW" with dependencies libmlx5, libibverbs: NO 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_SEND_EN" with dependencies libmlx5, libibverbs: NO 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_WAIT" with dependencies libmlx5, libibverbs: NO 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ACCESS_ASO" with dependencies libmlx5, libibverbs: NO 
00:02:51.446  Header "linux/if_link.h" has symbol "IFLA_NUM_VF" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "linux/if_link.h" has symbol "IFLA_EXT_MASK" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "linux/if_link.h" has symbol "IFLA_PHYS_SWITCH_ID" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "linux/if_link.h" has symbol "IFLA_PHYS_PORT_NAME" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "rdma/rdma_netlink.h" has symbol "RDMA_NL_NLDEV" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_GET" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_PORT_GET" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_INDEX" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_NAME" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_INDEX" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_STATE" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_NDEV_INDEX" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_domain" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_sampler" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_set_reclaim_device_memory" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_array" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "linux/devlink.h" has symbol "DEVLINK_GENL_NAME" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_aso" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/verbs.h" has symbol "INFINIBAND_VERBS_H" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "MLX5_WQE_UMR_CTRL_FLAG_INLINE" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_rule" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_allow_duplicate_rules" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/verbs.h" has symbol "ibv_reg_mr_iova" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/verbs.h" has symbol "ibv_import_device" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_root_table" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_steering_anchor" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Header "infiniband/verbs.h" has symbol "ibv_is_fork_initialized" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Checking whether type "struct mlx5dv_sw_parsing_caps" has member "sw_parsing_offloads" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Checking whether type "struct ibv_counter_set_init_attr" has member "counter_set_id" with dependencies libmlx5, libibverbs: NO 
00:02:51.446  Checking whether type "struct ibv_counters_init_attr" has member "comp_mask" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Checking whether type "struct mlx5dv_devx_uar" has member "mmap_off" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Checking whether type "struct mlx5dv_flow_matcher_attr" has member "ft_type" with dependencies libmlx5, libibverbs: YES 
00:02:51.446  Configuring mlx5_autoconf.h using configuration
00:02:51.446  Message: drivers/common/mlx5: Defining dependency "common_mlx5"
00:02:51.446  Run-time dependency libcrypto found: YES 3.1.1
00:02:51.446  Library IPSec_MB found: YES
00:02:51.446  Fetching value of define "IMB_VERSION_STR" : "1.5.0" 
00:02:51.446  Message: drivers/common/qat: Defining dependency "common_qat"
00:02:51.446  Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:51.446  Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:51.446  Library IPSec_MB found: YES
00:02:51.446  Fetching value of define "IMB_VERSION_STR" : "1.5.0" (cached)
00:02:51.446  Message: drivers/crypto/ipsec_mb: Defining dependency "crypto_ipsec_mb"
00:02:51.446  Compiler for C supports arguments -std=c11: YES (cached)
00:02:51.446  Compiler for C supports arguments -Wno-strict-prototypes: YES (cached)
00:02:51.446  Compiler for C supports arguments -D_BSD_SOURCE: YES (cached)
00:02:51.446  Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached)
00:02:51.446  Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached)
00:02:51.447  Message: drivers/crypto/mlx5: Defining dependency "crypto_mlx5"
00:02:51.447  Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:51.447  Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:51.447  Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:51.447  Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:51.447  Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:51.447  Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:51.447  Configuring doxy-api-html.conf using configuration
00:02:51.447  Configuring doxy-api-man.conf using configuration
00:02:51.447  Program mandb found: YES (/usr/bin/mandb)
00:02:51.447  Program sphinx-build found: NO
00:02:51.447  Configuring rte_build_config.h using configuration
00:02:51.447  Message: 
00:02:51.447  =================
00:02:51.447  Applications Enabled
00:02:51.447  =================
00:02:51.447  
00:02:51.447  apps:
00:02:51.447  	
00:02:51.447  
00:02:51.447  Message: 
00:02:51.447  =================
00:02:51.447  Libraries Enabled
00:02:51.447  =================
00:02:51.447  
00:02:51.447  libs:
00:02:51.447  	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:02:51.447  	net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:02:51.447  	cryptodev, dmadev, power, reorder, security, vhost, 
00:02:51.447  
00:02:51.447  Message: 
00:02:51.447  ===============
00:02:51.447  Drivers Enabled
00:02:51.447  ===============
00:02:51.447  
00:02:51.447  common:
00:02:51.447  	mlx5, qat, 
00:02:51.447  bus:
00:02:51.447  	auxiliary, pci, vdev, 
00:02:51.447  mempool:
00:02:51.447  	ring, 
00:02:51.447  dma:
00:02:51.447  	
00:02:51.447  net:
00:02:51.447  	
00:02:51.447  crypto:
00:02:51.447  	ipsec_mb, mlx5, 
00:02:51.447  compress:
00:02:51.447  	
00:02:51.447  vdpa:
00:02:51.447  	
00:02:51.447  
00:02:51.447  Message: 
00:02:51.447  =================
00:02:51.447  Content Skipped
00:02:51.447  =================
00:02:51.447  
00:02:51.447  apps:
00:02:51.447  	dumpcap:	explicitly disabled via build config
00:02:51.447  	graph:	explicitly disabled via build config
00:02:51.447  	pdump:	explicitly disabled via build config
00:02:51.447  	proc-info:	explicitly disabled via build config
00:02:51.447  	test-acl:	explicitly disabled via build config
00:02:51.447  	test-bbdev:	explicitly disabled via build config
00:02:51.447  	test-cmdline:	explicitly disabled via build config
00:02:51.447  	test-compress-perf:	explicitly disabled via build config
00:02:51.447  	test-crypto-perf:	explicitly disabled via build config
00:02:51.447  	test-dma-perf:	explicitly disabled via build config
00:02:51.447  	test-eventdev:	explicitly disabled via build config
00:02:51.447  	test-fib:	explicitly disabled via build config
00:02:51.447  	test-flow-perf:	explicitly disabled via build config
00:02:51.447  	test-gpudev:	explicitly disabled via build config
00:02:51.447  	test-mldev:	explicitly disabled via build config
00:02:51.447  	test-pipeline:	explicitly disabled via build config
00:02:51.447  	test-pmd:	explicitly disabled via build config
00:02:51.447  	test-regex:	explicitly disabled via build config
00:02:51.447  	test-sad:	explicitly disabled via build config
00:02:51.447  	test-security-perf:	explicitly disabled via build config
00:02:51.447  	
00:02:51.447  libs:
00:02:51.447  	argparse:	explicitly disabled via build config
00:02:51.447  	metrics:	explicitly disabled via build config
00:02:51.447  	acl:	explicitly disabled via build config
00:02:51.447  	bbdev:	explicitly disabled via build config
00:02:51.447  	bitratestats:	explicitly disabled via build config
00:02:51.447  	bpf:	explicitly disabled via build config
00:02:51.447  	cfgfile:	explicitly disabled via build config
00:02:51.447  	distributor:	explicitly disabled via build config
00:02:51.447  	efd:	explicitly disabled via build config
00:02:51.447  	eventdev:	explicitly disabled via build config
00:02:51.447  	dispatcher:	explicitly disabled via build config
00:02:51.447  	gpudev:	explicitly disabled via build config
00:02:51.447  	gro:	explicitly disabled via build config
00:02:51.447  	gso:	explicitly disabled via build config
00:02:51.447  	ip_frag:	explicitly disabled via build config
00:02:51.447  	jobstats:	explicitly disabled via build config
00:02:51.447  	latencystats:	explicitly disabled via build config
00:02:51.447  	lpm:	explicitly disabled via build config
00:02:51.447  	member:	explicitly disabled via build config
00:02:51.447  	pcapng:	explicitly disabled via build config
00:02:51.447  	rawdev:	explicitly disabled via build config
00:02:51.447  	regexdev:	explicitly disabled via build config
00:02:51.447  	mldev:	explicitly disabled via build config
00:02:51.447  	rib:	explicitly disabled via build config
00:02:51.447  	sched:	explicitly disabled via build config
00:02:51.447  	stack:	explicitly disabled via build config
00:02:51.447  	ipsec:	explicitly disabled via build config
00:02:51.447  	pdcp:	explicitly disabled via build config
00:02:51.447  	fib:	explicitly disabled via build config
00:02:51.447  	port:	explicitly disabled via build config
00:02:51.447  	pdump:	explicitly disabled via build config
00:02:51.447  	table:	explicitly disabled via build config
00:02:51.447  	pipeline:	explicitly disabled via build config
00:02:51.447  	graph:	explicitly disabled via build config
00:02:51.447  	node:	explicitly disabled via build config
00:02:51.447  	
00:02:51.447  drivers:
00:02:51.447  	common/cpt:	not in enabled drivers build config
00:02:51.447  	common/dpaax:	not in enabled drivers build config
00:02:51.447  	common/iavf:	not in enabled drivers build config
00:02:51.447  	common/idpf:	not in enabled drivers build config
00:02:51.447  	common/ionic:	not in enabled drivers build config
00:02:51.447  	common/mvep:	not in enabled drivers build config
00:02:51.447  	common/octeontx:	not in enabled drivers build config
00:02:51.447  	bus/cdx:	not in enabled drivers build config
00:02:51.447  	bus/dpaa:	not in enabled drivers build config
00:02:51.447  	bus/fslmc:	not in enabled drivers build config
00:02:51.447  	bus/ifpga:	not in enabled drivers build config
00:02:51.447  	bus/platform:	not in enabled drivers build config
00:02:51.447  	bus/uacce:	not in enabled drivers build config
00:02:51.447  	bus/vmbus:	not in enabled drivers build config
00:02:51.447  	common/cnxk:	not in enabled drivers build config
00:02:51.447  	common/nfp:	not in enabled drivers build config
00:02:51.447  	common/nitrox:	not in enabled drivers build config
00:02:51.447  	common/sfc_efx:	not in enabled drivers build config
00:02:51.447  	mempool/bucket:	not in enabled drivers build config
00:02:51.447  	mempool/cnxk:	not in enabled drivers build config
00:02:51.447  	mempool/dpaa:	not in enabled drivers build config
00:02:51.447  	mempool/dpaa2:	not in enabled drivers build config
00:02:51.447  	mempool/octeontx:	not in enabled drivers build config
00:02:51.447  	mempool/stack:	not in enabled drivers build config
00:02:51.447  	dma/cnxk:	not in enabled drivers build config
00:02:51.447  	dma/dpaa:	not in enabled drivers build config
00:02:51.447  	dma/dpaa2:	not in enabled drivers build config
00:02:51.447  	dma/hisilicon:	not in enabled drivers build config
00:02:51.447  	dma/idxd:	not in enabled drivers build config
00:02:51.447  	dma/ioat:	not in enabled drivers build config
00:02:51.447  	dma/skeleton:	not in enabled drivers build config
00:02:51.447  	net/af_packet:	not in enabled drivers build config
00:02:51.447  	net/af_xdp:	not in enabled drivers build config
00:02:51.447  	net/ark:	not in enabled drivers build config
00:02:51.447  	net/atlantic:	not in enabled drivers build config
00:02:51.447  	net/avp:	not in enabled drivers build config
00:02:51.447  	net/axgbe:	not in enabled drivers build config
00:02:51.447  	net/bnx2x:	not in enabled drivers build config
00:02:51.447  	net/bnxt:	not in enabled drivers build config
00:02:51.447  	net/bonding:	not in enabled drivers build config
00:02:51.447  	net/cnxk:	not in enabled drivers build config
00:02:51.447  	net/cpfl:	not in enabled drivers build config
00:02:51.447  	net/cxgbe:	not in enabled drivers build config
00:02:51.447  	net/dpaa:	not in enabled drivers build config
00:02:51.447  	net/dpaa2:	not in enabled drivers build config
00:02:51.447  	net/e1000:	not in enabled drivers build config
00:02:51.447  	net/ena:	not in enabled drivers build config
00:02:51.447  	net/enetc:	not in enabled drivers build config
00:02:51.447  	net/enetfec:	not in enabled drivers build config
00:02:51.447  	net/enic:	not in enabled drivers build config
00:02:51.447  	net/failsafe:	not in enabled drivers build config
00:02:51.447  	net/fm10k:	not in enabled drivers build config
00:02:51.448  	net/gve:	not in enabled drivers build config
00:02:51.448  	net/hinic:	not in enabled drivers build config
00:02:51.448  	net/hns3:	not in enabled drivers build config
00:02:51.448  	net/i40e:	not in enabled drivers build config
00:02:51.448  	net/iavf:	not in enabled drivers build config
00:02:51.448  	net/ice:	not in enabled drivers build config
00:02:51.448  	net/idpf:	not in enabled drivers build config
00:02:51.448  	net/igc:	not in enabled drivers build config
00:02:51.448  	net/ionic:	not in enabled drivers build config
00:02:51.448  	net/ipn3ke:	not in enabled drivers build config
00:02:51.448  	net/ixgbe:	not in enabled drivers build config
00:02:51.448  	net/mana:	not in enabled drivers build config
00:02:51.448  	net/memif:	not in enabled drivers build config
00:02:51.448  	net/mlx4:	not in enabled drivers build config
00:02:51.448  	net/mlx5:	not in enabled drivers build config
00:02:51.448  	net/mvneta:	not in enabled drivers build config
00:02:51.448  	net/mvpp2:	not in enabled drivers build config
00:02:51.448  	net/netvsc:	not in enabled drivers build config
00:02:51.448  	net/nfb:	not in enabled drivers build config
00:02:51.448  	net/nfp:	not in enabled drivers build config
00:02:51.448  	net/ngbe:	not in enabled drivers build config
00:02:51.448  	net/null:	not in enabled drivers build config
00:02:51.448  	net/octeontx:	not in enabled drivers build config
00:02:51.448  	net/octeon_ep:	not in enabled drivers build config
00:02:51.448  	net/pcap:	not in enabled drivers build config
00:02:51.448  	net/pfe:	not in enabled drivers build config
00:02:51.448  	net/qede:	not in enabled drivers build config
00:02:51.448  	net/ring:	not in enabled drivers build config
00:02:51.448  	net/sfc:	not in enabled drivers build config
00:02:51.448  	net/softnic:	not in enabled drivers build config
00:02:51.448  	net/tap:	not in enabled drivers build config
00:02:51.448  	net/thunderx:	not in enabled drivers build config
00:02:51.448  	net/txgbe:	not in enabled drivers build config
00:02:51.448  	net/vdev_netvsc:	not in enabled drivers build config
00:02:51.448  	net/vhost:	not in enabled drivers build config
00:02:51.448  	net/virtio:	not in enabled drivers build config
00:02:51.448  	net/vmxnet3:	not in enabled drivers build config
00:02:51.448  	raw/*:	missing internal dependency, "rawdev"
00:02:51.448  	crypto/armv8:	not in enabled drivers build config
00:02:51.448  	crypto/bcmfs:	not in enabled drivers build config
00:02:51.448  	crypto/caam_jr:	not in enabled drivers build config
00:02:51.448  	crypto/ccp:	not in enabled drivers build config
00:02:51.448  	crypto/cnxk:	not in enabled drivers build config
00:02:51.448  	crypto/dpaa_sec:	not in enabled drivers build config
00:02:51.448  	crypto/dpaa2_sec:	not in enabled drivers build config
00:02:51.448  	crypto/mvsam:	not in enabled drivers build config
00:02:51.448  	crypto/nitrox:	not in enabled drivers build config
00:02:51.448  	crypto/null:	not in enabled drivers build config
00:02:51.448  	crypto/octeontx:	not in enabled drivers build config
00:02:51.448  	crypto/openssl:	not in enabled drivers build config
00:02:51.448  	crypto/scheduler:	not in enabled drivers build config
00:02:51.448  	crypto/uadk:	not in enabled drivers build config
00:02:51.448  	crypto/virtio:	not in enabled drivers build config
00:02:51.448  	compress/isal:	not in enabled drivers build config
00:02:51.448  	compress/mlx5:	not in enabled drivers build config
00:02:51.448  	compress/nitrox:	not in enabled drivers build config
00:02:51.448  	compress/octeontx:	not in enabled drivers build config
00:02:51.448  	compress/zlib:	not in enabled drivers build config
00:02:51.448  	regex/*:	missing internal dependency, "regexdev"
00:02:51.448  	ml/*:	missing internal dependency, "mldev"
00:02:51.448  	vdpa/ifc:	not in enabled drivers build config
00:02:51.448  	vdpa/mlx5:	not in enabled drivers build config
00:02:51.448  	vdpa/nfp:	not in enabled drivers build config
00:02:51.448  	vdpa/sfc:	not in enabled drivers build config
00:02:51.448  	event/*:	missing internal dependency, "eventdev"
00:02:51.448  	baseband/*:	missing internal dependency, "bbdev"
00:02:51.448  	gpu/*:	missing internal dependency, "gpudev"
00:02:51.448  	
00:02:51.448  
00:02:52.015  Build targets in project: 107
00:02:52.015  
00:02:52.015  DPDK 24.03.0
00:02:52.015  
00:02:52.015    User defined options
00:02:52.015      buildtype          : debug
00:02:52.015      default_library    : shared
00:02:52.015      libdir             : lib
00:02:52.015      prefix             : /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build
00:02:52.015      b_sanitize         : address
00:02:52.015      c_args             : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -I/var/jenkins/workspace/vfio-user-phy-autotest/spdk/intel-ipsec-mb/lib -DNO_COMPAT_IMB_API_053 -fPIC -Werror 
00:02:52.015      c_link_args        : -L/var/jenkins/workspace/vfio-user-phy-autotest/spdk/intel-ipsec-mb/lib
00:02:52.015      cpu_instruction_set: native
00:02:52.015      disable_apps       : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:52.015      disable_libs       : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:52.015      enable_docs        : false
00:02:52.015      enable_drivers     : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,crypto/qat,compress/qat,common/qat,common/mlx5,bus/auxiliary,crypto,crypto/aesni_mb,crypto/mlx5,crypto/ipsec_mb
00:02:52.015      enable_kmods       : false
00:02:52.015      max_lcores         : 128
00:02:52.015      tests              : false
00:02:52.015  
00:02:52.015  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:52.585  ninja: Entering directory `/var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build-tmp'
00:02:52.585  [1/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:52.585  [2/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:52.585  [3/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:52.585  [4/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:52.585  [5/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:52.585  [6/363] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:52.585  [7/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:52.585  [8/363] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:52.585  [9/363] Linking static target lib/librte_kvargs.a
00:02:52.585  [10/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:52.585  [11/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:52.585  [12/363] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:52.585  [13/363] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:52.585  [14/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:52.848  [15/363] Linking static target lib/librte_log.a
00:02:52.848  [16/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:53.416  [17/363] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.416  [18/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:53.416  [19/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:53.416  [20/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:53.416  [21/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:53.416  [22/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:53.416  [23/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:53.416  [24/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:53.416  [25/363] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:53.416  [26/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:53.416  [27/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:53.416  [28/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:53.416  [29/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:53.416  [30/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:53.416  [31/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:53.416  [32/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:53.416  [33/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:53.416  [34/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:53.682  [35/363] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:53.682  [36/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:53.682  [37/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:53.682  [38/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:53.682  [39/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:53.682  [40/363] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:53.682  [41/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:53.682  [42/363] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:53.682  [43/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:53.682  [44/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:53.682  [45/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:53.682  [46/363] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:53.682  [47/363] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:53.682  [48/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:53.682  [49/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:53.682  [50/363] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:53.682  [51/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:53.682  [52/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:53.682  [53/363] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.682  [54/363] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:53.682  [55/363] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:53.682  [56/363] Linking static target lib/librte_telemetry.a
00:02:53.682  [57/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:53.682  [58/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:53.682  [59/363] Linking target lib/librte_log.so.24.1
00:02:53.682  [60/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:53.682  [61/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:53.682  [62/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:53.682  [63/363] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:53.682  [64/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:53.941  [65/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:53.941  [66/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:53.941  [67/363] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:53.941  [68/363] Linking target lib/librte_kvargs.so.24.1
00:02:54.202  [69/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:54.202  [70/363] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:54.202  [71/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:54.202  [72/363] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:54.202  [73/363] Linking static target lib/librte_pci.a
00:02:54.466  [74/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:54.466  [75/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:54.466  [76/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:54.466  [77/363] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:54.466  [78/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:54.466  [79/363] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:54.466  [80/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:54.466  [81/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:54.466  [82/363] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:54.466  [83/363] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:54.466  [84/363] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:54.466  [85/363] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:54.466  [86/363] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:54.466  [87/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:54.466  [88/363] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:54.466  [89/363] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:54.466  [90/363] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:54.466  [91/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:54.466  [92/363] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:54.466  [93/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:54.466  [94/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:54.466  [95/363] Linking static target lib/librte_ring.a
00:02:54.466  [96/363] Linking static target lib/librte_meter.a
00:02:54.466  [97/363] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:54.466  [98/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:54.466  [99/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:54.731  [100/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:54.731  [101/363] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:54.731  [102/363] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:54.731  [103/363] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.731  [104/363] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:54.731  [105/363] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:54.731  [106/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:54.731  [107/363] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.731  [108/363] Linking target lib/librte_telemetry.so.24.1
00:02:54.731  [109/363] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:54.731  [110/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:54.731  [111/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:54.731  [112/363] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:54.731  [113/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:54.731  [114/363] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:54.991  [115/363] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:54.991  [116/363] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:54.991  [117/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:54.991  [118/363] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:54.991  [119/363] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:54.991  [120/363] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:54.991  [121/363] Linking static target lib/librte_mempool.a
00:02:54.991  [122/363] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:54.991  [123/363] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.991  [124/363] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:54.991  [125/363] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:54.991  [126/363] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:54.991  [127/363] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:54.991  [128/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:54.991  [129/363] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:55.253  [130/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:55.253  [131/363] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.253  [132/363] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:55.253  [133/363] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:55.253  [134/363] Linking static target lib/librte_rcu.a
00:02:55.253  [135/363] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_params.c.o
00:02:55.514  [136/363] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:55.514  [137/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:55.514  [138/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:55.514  [139/363] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:55.514  [140/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:55.514  [141/363] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:55.514  [142/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:55.514  [143/363] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:55.514  [144/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:55.514  [145/363] Linking static target lib/librte_cmdline.a
00:02:55.514  [146/363] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:55.514  [147/363] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:55.514  [148/363] Linking static target lib/librte_eal.a
00:02:55.514  [149/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:55.777  [150/363] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:55.777  [151/363] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:55.777  [152/363] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:55.777  [153/363] Linking static target lib/librte_timer.a
00:02:55.777  [154/363] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.777  [155/363] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:55.777  [156/363] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:55.777  [157/363] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:55.777  [158/363] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:56.049  [159/363] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:56.049  [160/363] Linking static target lib/librte_dmadev.a
00:02:56.049  [161/363] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_linux_auxiliary.c.o
00:02:56.049  [162/363] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:56.049  [163/363] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_common.c.o
00:02:56.049  [164/363] Linking static target drivers/libtmp_rte_bus_auxiliary.a
00:02:56.049  [165/363] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.049  [166/363] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:56.308  [167/363] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:56.308  [168/363] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.308  [169/363] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:56.308  [170/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:56.308  [171/363] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:56.308  [172/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:56.571  [173/363] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:56.571  [174/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_logs.c.o
00:02:56.571  [175/363] Linking static target lib/librte_net.a
00:02:56.571  [176/363] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:56.571  [177/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:56.571  [178/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:56.571  [179/363] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:56.571  [180/363] Generating drivers/rte_bus_auxiliary.pmd.c with a custom command
00:02:56.571  [181/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_glue.c.o
00:02:56.571  [182/363] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:56.571  [183/363] Compiling C object drivers/librte_bus_auxiliary.a.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o
00:02:56.571  [184/363] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:56.571  [185/363] Linking static target drivers/librte_bus_auxiliary.a
00:02:56.571  [186/363] Compiling C object drivers/librte_bus_auxiliary.so.24.1.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o
00:02:56.571  [187/363] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:56.571  [188/363] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:56.571  [189/363] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:56.835  [190/363] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.835  [191/363] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:56.835  [192/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:56.835  [193/363] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:56.835  [194/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mp.c.o
00:02:56.835  [195/363] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.835  [196/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_malloc.c.o
00:02:56.835  [197/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_pci.c.o
00:02:57.096  [198/363] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:57.096  [199/363] Linking static target lib/librte_power.a
00:02:57.096  [200/363] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.097  [201/363] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:57.097  [202/363] Generating drivers/rte_bus_auxiliary.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.097  [203/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_auxiliary.c.o
00:02:57.097  [204/363] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:57.097  [205/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_devx.c.o
00:02:57.097  [206/363] Linking static target drivers/librte_bus_vdev.a
00:02:57.097  [207/363] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:57.097  [208/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common.c.o
00:02:57.097  [209/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_utils.c.o
00:02:57.359  [210/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_common.c.o
00:02:57.359  [211/363] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:57.359  [212/363] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:57.359  [213/363] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:57.359  [214/363] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:57.359  [215/363] Linking static target lib/librte_hash.a
00:02:57.359  [216/363] Linking static target drivers/librte_bus_pci.a
00:02:57.359  [217/363] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:57.359  [218/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_verbs.c.o
00:02:57.359  [219/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_nl.c.o
00:02:57.359  [220/363] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:57.359  [221/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_pf2vf.c.o
00:02:57.359  [222/363] Linking static target lib/librte_compressdev.a
00:02:57.359  [223/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen2.c.o
00:02:57.359  [224/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen3.c.o
00:02:57.624  [225/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen1.c.o
00:02:57.624  [226/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen5.c.o
00:02:57.624  [227/363] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.624  [228/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_device.c.o
00:02:57.624  [229/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen4.c.o
00:02:57.624  [230/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_os.c.o
00:02:57.624  [231/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen_lce.c.o
00:02:57.624  [232/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen2.c.o
00:02:57.624  [233/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen1.c.o
00:02:57.624  [234/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen3.c.o
00:02:57.883  [235/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen5.c.o
00:02:57.883  [236/363] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:57.883  [237/363] Linking static target lib/librte_reorder.a
00:02:57.883  [238/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen4.c.o
00:02:57.883  [239/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp_pmd.c.o
00:02:57.883  [240/363] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:57.883  [241/363] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.142  [242/363] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.142  [243/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_crypto.c.o
00:02:58.142  [244/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_asym_pmd_gen1.c.o
00:02:58.142  [245/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym.c.o
00:02:58.142  [246/363] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.142  [247/363] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.142  [248/363] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.142  [249/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen2.c.o
00:02:58.401  [250/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen5.c.o
00:02:58.401  [251/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_ops.c.o
00:02:58.660  [252/363] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_dek.c.o
00:02:58.920  [253/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_devx_cmds.c.o
00:02:58.920  [254/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_private.c.o
00:02:58.920  [255/363] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto.c.o
00:02:58.920  [256/363] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:58.920  [257/363] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:58.920  [258/363] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_gcm.c.o
00:02:59.179  [259/363] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_xts.c.o
00:02:59.179  [260/363] Linking static target drivers/libtmp_rte_crypto_mlx5.a
00:02:59.179  [261/363] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:59.180  [262/363] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:59.180  [263/363] Linking static target drivers/librte_mempool_ring.a
00:02:59.180  [264/363] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:59.180  [265/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen_lce.c.o
00:02:59.449  [266/363] Generating drivers/rte_crypto_mlx5.pmd.c with a custom command
00:02:59.449  [267/363] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:59.449  [268/363] Linking static target lib/librte_security.a
00:02:59.449  [269/363] Compiling C object drivers/librte_crypto_mlx5.so.24.1.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o
00:02:59.449  [270/363] Compiling C object drivers/librte_crypto_mlx5.a.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o
00:02:59.449  [271/363] Linking static target drivers/librte_crypto_mlx5.a
00:02:59.743  [272/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:59.743  [273/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mr.c.o
00:02:59.743  [274/363] Linking static target drivers/libtmp_rte_common_mlx5.a
00:02:59.743  [275/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_qp.c.o
00:02:59.743  [276/363] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:03:00.002  [277/363] Generating drivers/rte_common_mlx5.pmd.c with a custom command
00:03:00.002  [278/363] Compiling C object drivers/librte_common_mlx5.so.24.1.p/meson-generated_.._rte_common_mlx5.pmd.c.o
00:03:00.002  [279/363] Compiling C object drivers/librte_common_mlx5.a.p/meson-generated_.._rte_common_mlx5.pmd.c.o
00:03:00.002  [280/363] Linking static target drivers/librte_common_mlx5.a
00:03:00.262  [281/363] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:03:00.262  [282/363] Linking static target lib/librte_mbuf.a
00:03:00.520  [283/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_chacha_poly.c.o
00:03:00.520  [284/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_gcm.c.o
00:03:00.520  [285/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_zuc.c.o
00:03:01.088  [286/363] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.088  [287/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_kasumi.c.o
00:03:01.346  [288/363] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:03:01.346  [289/363] Linking static target lib/librte_cryptodev.a
00:03:01.605  [290/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:03:01.864  [291/363] Linking static target lib/librte_ethdev.a
00:03:01.864  [292/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_mb.c.o
00:03:02.122  [293/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_snow3g.c.o
00:03:02.122  [294/363] Linking static target drivers/libtmp_rte_crypto_ipsec_mb.a
00:03:02.381  [295/363] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.639  [296/363] Generating drivers/rte_crypto_ipsec_mb.pmd.c with a custom command
00:03:02.639  [297/363] Compiling C object drivers/librte_crypto_ipsec_mb.a.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o
00:03:02.639  [298/363] Compiling C object drivers/librte_crypto_ipsec_mb.so.24.1.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o
00:03:02.639  [299/363] Linking static target drivers/librte_crypto_ipsec_mb.a
00:03:02.897  [300/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp.c.o
00:03:02.897  [301/363] Generating drivers/rte_common_mlx5.sym_chk with a custom command (wrapped by meson to capture output)
00:03:03.155  [302/363] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:03:03.155  [303/363] Linking target lib/librte_eal.so.24.1
00:03:03.412  [304/363] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:03:03.412  [305/363] Linking target lib/librte_ring.so.24.1
00:03:03.412  [306/363] Linking target lib/librte_timer.so.24.1
00:03:03.413  [307/363] Linking target drivers/librte_bus_auxiliary.so.24.1
00:03:03.413  [308/363] Linking target lib/librte_dmadev.so.24.1
00:03:03.413  [309/363] Linking target lib/librte_meter.so.24.1
00:03:03.413  [310/363] Linking target lib/librte_pci.so.24.1
00:03:03.413  [311/363] Linking target drivers/librte_bus_vdev.so.24.1
00:03:03.413  [312/363] Generating symbol file drivers/librte_bus_auxiliary.so.24.1.p/librte_bus_auxiliary.so.24.1.symbols
00:03:03.413  [313/363] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:03:03.413  [314/363] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:03:03.413  [315/363] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:03:03.413  [316/363] Generating symbol file drivers/librte_bus_vdev.so.24.1.p/librte_bus_vdev.so.24.1.symbols
00:03:03.413  [317/363] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:03:03.413  [318/363] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:03:03.670  [319/363] Linking target lib/librte_rcu.so.24.1
00:03:03.670  [320/363] Linking target lib/librte_mempool.so.24.1
00:03:03.670  [321/363] Linking target drivers/librte_bus_pci.so.24.1
00:03:03.670  [322/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen4.c.o
00:03:03.670  [323/363] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:03:03.670  [324/363] Generating symbol file drivers/librte_bus_pci.so.24.1.p/librte_bus_pci.so.24.1.symbols
00:03:03.670  [325/363] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:03:03.670  [326/363] Linking target drivers/librte_mempool_ring.so.24.1
00:03:03.670  [327/363] Linking target lib/librte_mbuf.so.24.1
00:03:03.929  [328/363] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:03:03.929  [329/363] Linking target lib/librte_reorder.so.24.1
00:03:03.929  [330/363] Linking target lib/librte_compressdev.so.24.1
00:03:03.929  [331/363] Linking target lib/librte_net.so.24.1
00:03:03.929  [332/363] Linking target lib/librte_cryptodev.so.24.1
00:03:04.193  [333/363] Generating symbol file lib/librte_compressdev.so.24.1.p/librte_compressdev.so.24.1.symbols
00:03:04.193  [334/363] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:03:04.193  [335/363] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:03:04.193  [336/363] Linking target lib/librte_cmdline.so.24.1
00:03:04.193  [337/363] Linking target lib/librte_security.so.24.1
00:03:04.193  [338/363] Linking target lib/librte_hash.so.24.1
00:03:04.193  [339/363] Generating symbol file lib/librte_security.so.24.1.p/librte_security.so.24.1.symbols
00:03:04.193  [340/363] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:03:04.450  [341/363] Linking target drivers/librte_common_mlx5.so.24.1
00:03:04.450  [342/363] Generating symbol file drivers/librte_common_mlx5.so.24.1.p/librte_common_mlx5.so.24.1.symbols
00:03:04.450  [343/363] Linking target drivers/librte_crypto_mlx5.so.24.1
00:03:04.450  [344/363] Linking target drivers/librte_crypto_ipsec_mb.so.24.1
00:03:04.707  [345/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym_session.c.o
00:03:04.707  [346/363] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:03:06.089  [347/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_sym_pmd_gen1.c.o
00:03:06.089  [348/363] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:06.361  [349/363] Linking target lib/librte_ethdev.so.24.1
00:03:06.361  [350/363] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:03:06.361  [351/363] Linking target lib/librte_power.so.24.1
00:03:07.293  [352/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen3.c.o
00:03:29.221  [353/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_asym.c.o
00:03:29.221  [354/363] Linking static target drivers/libtmp_rte_common_qat.a
00:03:29.221  [355/363] Generating drivers/rte_common_qat.pmd.c with a custom command
00:03:29.221  [356/363] Compiling C object drivers/librte_common_qat.so.24.1.p/meson-generated_.._rte_common_qat.pmd.c.o
00:03:29.221  [357/363] Compiling C object drivers/librte_common_qat.a.p/meson-generated_.._rte_common_qat.pmd.c.o
00:03:29.221  [358/363] Linking static target drivers/librte_common_qat.a
00:03:29.221  [359/363] Linking target drivers/librte_common_qat.so.24.1
00:03:29.790  [360/363] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:03:29.790  [361/363] Linking static target lib/librte_vhost.a
00:03:31.166  [362/363] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:03:31.166  [363/363] Linking target lib/librte_vhost.so.24.1
00:03:31.166  INFO: autodetecting backend as ninja
00:03:31.166  INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build-tmp -j 48
00:03:32.101    CC lib/log/log.o
00:03:32.101    CC lib/log/log_flags.o
00:03:32.101    CC lib/log/log_deprecated.o
00:03:32.101    CC lib/ut_mock/mock.o
00:03:32.358    CC lib/ut/ut.o
00:03:32.358    LIB libspdk_ut_mock.a
00:03:32.358    LIB libspdk_ut.a
00:03:32.358    LIB libspdk_log.a
00:03:32.358    SO libspdk_ut_mock.so.6.0
00:03:32.358    SO libspdk_ut.so.2.0
00:03:32.358    SO libspdk_log.so.7.1
00:03:32.616    SYMLINK libspdk_ut_mock.so
00:03:32.616    SYMLINK libspdk_ut.so
00:03:32.616    SYMLINK libspdk_log.so
00:03:32.616    CC lib/dma/dma.o
00:03:32.616    CXX lib/trace_parser/trace.o
00:03:32.616    CC lib/util/base64.o
00:03:32.616    CC lib/ioat/ioat.o
00:03:32.616    CC lib/util/bit_array.o
00:03:32.617    CC lib/util/cpuset.o
00:03:32.617    CC lib/util/crc16.o
00:03:32.617    CC lib/util/crc32.o
00:03:32.617    CC lib/util/crc32c.o
00:03:32.617    CC lib/util/crc32_ieee.o
00:03:32.617    CC lib/util/crc64.o
00:03:32.617    CC lib/util/dif.o
00:03:32.617    CC lib/util/fd.o
00:03:32.617    CC lib/util/fd_group.o
00:03:32.617    CC lib/util/file.o
00:03:32.617    CC lib/util/hexlify.o
00:03:32.617    CC lib/util/iov.o
00:03:32.617    CC lib/util/math.o
00:03:32.617    CC lib/util/net.o
00:03:32.617    CC lib/util/pipe.o
00:03:32.617    CC lib/util/string.o
00:03:32.617    CC lib/util/strerror_tls.o
00:03:32.617    CC lib/util/uuid.o
00:03:32.617    CC lib/util/xor.o
00:03:32.617    CC lib/util/zipf.o
00:03:32.617    CC lib/util/md5.o
00:03:32.874    CC lib/vfio_user/host/vfio_user_pci.o
00:03:32.874    CC lib/vfio_user/host/vfio_user.o
00:03:32.874    LIB libspdk_dma.a
00:03:32.874    SO libspdk_dma.so.5.0
00:03:32.874    SYMLINK libspdk_dma.so
00:03:33.133    LIB libspdk_ioat.a
00:03:33.133    SO libspdk_ioat.so.7.0
00:03:33.133    SYMLINK libspdk_ioat.so
00:03:33.133    LIB libspdk_vfio_user.a
00:03:33.133    SO libspdk_vfio_user.so.5.0
00:03:33.133    SYMLINK libspdk_vfio_user.so
00:03:33.390    LIB libspdk_util.a
00:03:33.390    SO libspdk_util.so.10.1
00:03:33.647    SYMLINK libspdk_util.so
00:03:33.905    LIB libspdk_trace_parser.a
00:03:33.905    CC lib/idxd/idxd.o
00:03:33.905    CC lib/env_dpdk/env.o
00:03:33.905    CC lib/json/json_parse.o
00:03:33.905    CC lib/conf/conf.o
00:03:33.905    CC lib/idxd/idxd_user.o
00:03:33.905    CC lib/env_dpdk/memory.o
00:03:33.906    CC lib/json/json_util.o
00:03:33.906    CC lib/vmd/vmd.o
00:03:33.906    CC lib/rdma_utils/rdma_utils.o
00:03:33.906    CC lib/idxd/idxd_kernel.o
00:03:33.906    CC lib/vmd/led.o
00:03:33.906    CC lib/env_dpdk/pci.o
00:03:33.906    CC lib/json/json_write.o
00:03:33.906    CC lib/env_dpdk/init.o
00:03:33.906    CC lib/env_dpdk/threads.o
00:03:33.906    CC lib/env_dpdk/pci_ioat.o
00:03:33.906    CC lib/env_dpdk/pci_virtio.o
00:03:33.906    CC lib/env_dpdk/pci_vmd.o
00:03:33.906    CC lib/env_dpdk/pci_idxd.o
00:03:33.906    CC lib/env_dpdk/pci_event.o
00:03:33.906    CC lib/env_dpdk/sigbus_handler.o
00:03:33.906    CC lib/env_dpdk/pci_dpdk.o
00:03:33.906    CC lib/env_dpdk/pci_dpdk_2207.o
00:03:33.906    CC lib/env_dpdk/pci_dpdk_2211.o
00:03:33.906    SO libspdk_trace_parser.so.6.0
00:03:33.906    SYMLINK libspdk_trace_parser.so
00:03:34.164    LIB libspdk_conf.a
00:03:34.164    SO libspdk_conf.so.6.0
00:03:34.164    LIB libspdk_rdma_utils.a
00:03:34.164    SYMLINK libspdk_conf.so
00:03:34.164    SO libspdk_rdma_utils.so.1.0
00:03:34.164    LIB libspdk_json.a
00:03:34.164    SO libspdk_json.so.6.0
00:03:34.164    SYMLINK libspdk_rdma_utils.so
00:03:34.423    SYMLINK libspdk_json.so
00:03:34.423    CC lib/rdma_provider/common.o
00:03:34.423    CC lib/rdma_provider/rdma_provider_verbs.o
00:03:34.423    CC lib/jsonrpc/jsonrpc_server.o
00:03:34.423    CC lib/jsonrpc/jsonrpc_server_tcp.o
00:03:34.423    CC lib/jsonrpc/jsonrpc_client_tcp.o
00:03:34.423    CC lib/jsonrpc/jsonrpc_client.o
00:03:34.680    LIB libspdk_rdma_provider.a
00:03:34.680    LIB libspdk_idxd.a
00:03:34.680    SO libspdk_rdma_provider.so.7.0
00:03:34.680    SO libspdk_idxd.so.12.1
00:03:34.680    LIB libspdk_vmd.a
00:03:34.680    SYMLINK libspdk_rdma_provider.so
00:03:34.680    SO libspdk_vmd.so.6.0
00:03:34.680    SYMLINK libspdk_idxd.so
00:03:34.680    LIB libspdk_jsonrpc.a
00:03:34.938    SYMLINK libspdk_vmd.so
00:03:34.939    SO libspdk_jsonrpc.so.6.0
00:03:34.939    SYMLINK libspdk_jsonrpc.so
00:03:34.939    CC lib/rpc/rpc.o
00:03:35.197    LIB libspdk_rpc.a
00:03:35.455    SO libspdk_rpc.so.6.0
00:03:35.455    SYMLINK libspdk_rpc.so
00:03:35.455    CC lib/trace/trace.o
00:03:35.455    CC lib/trace/trace_flags.o
00:03:35.455    CC lib/trace/trace_rpc.o
00:03:35.455    CC lib/notify/notify.o
00:03:35.455    CC lib/keyring/keyring.o
00:03:35.455    CC lib/notify/notify_rpc.o
00:03:35.455    CC lib/keyring/keyring_rpc.o
00:03:35.713    LIB libspdk_notify.a
00:03:35.713    SO libspdk_notify.so.6.0
00:03:35.713    SYMLINK libspdk_notify.so
00:03:35.972    LIB libspdk_keyring.a
00:03:35.972    SO libspdk_keyring.so.2.0
00:03:35.972    LIB libspdk_trace.a
00:03:35.972    SO libspdk_trace.so.11.0
00:03:35.972    SYMLINK libspdk_keyring.so
00:03:35.972    SYMLINK libspdk_trace.so
00:03:36.230    CC lib/thread/thread.o
00:03:36.230    CC lib/thread/iobuf.o
00:03:36.230    CC lib/sock/sock.o
00:03:36.230    CC lib/sock/sock_rpc.o
00:03:36.797    LIB libspdk_sock.a
00:03:36.797    SO libspdk_sock.so.10.0
00:03:36.797    SYMLINK libspdk_sock.so
00:03:36.797    LIB libspdk_env_dpdk.a
00:03:36.797    CC lib/nvme/nvme_ctrlr_cmd.o
00:03:36.797    CC lib/nvme/nvme_fabric.o
00:03:36.797    CC lib/nvme/nvme_ctrlr.o
00:03:36.797    CC lib/nvme/nvme_ns_cmd.o
00:03:36.797    CC lib/nvme/nvme_ns.o
00:03:36.797    CC lib/nvme/nvme_pcie_common.o
00:03:36.797    CC lib/nvme/nvme_pcie.o
00:03:36.797    CC lib/nvme/nvme_qpair.o
00:03:36.797    CC lib/nvme/nvme.o
00:03:36.797    CC lib/nvme/nvme_quirks.o
00:03:36.797    CC lib/nvme/nvme_transport.o
00:03:36.797    CC lib/nvme/nvme_discovery.o
00:03:36.797    CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:03:36.797    CC lib/nvme/nvme_ns_ocssd_cmd.o
00:03:36.797    CC lib/nvme/nvme_tcp.o
00:03:36.797    CC lib/nvme/nvme_opal.o
00:03:36.797    CC lib/nvme/nvme_io_msg.o
00:03:36.797    CC lib/nvme/nvme_poll_group.o
00:03:36.797    CC lib/nvme/nvme_zns.o
00:03:36.797    CC lib/nvme/nvme_stubs.o
00:03:36.797    CC lib/nvme/nvme_auth.o
00:03:36.797    CC lib/nvme/nvme_cuse.o
00:03:36.797    CC lib/nvme/nvme_vfio_user.o
00:03:36.797    CC lib/nvme/nvme_rdma.o
00:03:37.055    SO libspdk_env_dpdk.so.15.1
00:03:37.314    SYMLINK libspdk_env_dpdk.so
00:03:38.252    LIB libspdk_thread.a
00:03:38.252    SO libspdk_thread.so.11.0
00:03:38.252    SYMLINK libspdk_thread.so
00:03:38.511    CC lib/virtio/virtio.o
00:03:38.511    CC lib/vfu_tgt/tgt_endpoint.o
00:03:38.512    CC lib/init/json_config.o
00:03:38.512    CC lib/accel/accel.o
00:03:38.512    CC lib/vfu_tgt/tgt_rpc.o
00:03:38.512    CC lib/accel/accel_rpc.o
00:03:38.512    CC lib/init/subsystem.o
00:03:38.512    CC lib/virtio/virtio_vhost_user.o
00:03:38.512    CC lib/init/subsystem_rpc.o
00:03:38.512    CC lib/accel/accel_sw.o
00:03:38.512    CC lib/virtio/virtio_vfio_user.o
00:03:38.512    CC lib/virtio/virtio_pci.o
00:03:38.512    CC lib/init/rpc.o
00:03:38.512    CC lib/fsdev/fsdev.o
00:03:38.512    CC lib/blob/blobstore.o
00:03:38.512    CC lib/fsdev/fsdev_io.o
00:03:38.512    CC lib/blob/request.o
00:03:38.512    CC lib/fsdev/fsdev_rpc.o
00:03:38.512    CC lib/blob/zeroes.o
00:03:38.512    CC lib/blob/blob_bs_dev.o
00:03:38.803    LIB libspdk_init.a
00:03:38.803    SO libspdk_init.so.6.0
00:03:38.803    SYMLINK libspdk_init.so
00:03:38.803    LIB libspdk_vfu_tgt.a
00:03:39.086    LIB libspdk_virtio.a
00:03:39.086    SO libspdk_vfu_tgt.so.3.0
00:03:39.086    SO libspdk_virtio.so.7.0
00:03:39.086    SYMLINK libspdk_vfu_tgt.so
00:03:39.086    SYMLINK libspdk_virtio.so
00:03:39.086    CC lib/event/app.o
00:03:39.086    CC lib/event/reactor.o
00:03:39.086    CC lib/event/log_rpc.o
00:03:39.087    CC lib/event/app_rpc.o
00:03:39.087    CC lib/event/scheduler_static.o
00:03:39.345    LIB libspdk_fsdev.a
00:03:39.345    SO libspdk_fsdev.so.2.0
00:03:39.345    SYMLINK libspdk_fsdev.so
00:03:39.603    CC lib/fuse_dispatcher/fuse_dispatcher.o
00:03:39.603    LIB libspdk_event.a
00:03:39.603    SO libspdk_event.so.14.0
00:03:39.862    SYMLINK libspdk_event.so
00:03:39.862    LIB libspdk_nvme.a
00:03:39.862    LIB libspdk_accel.a
00:03:40.121    SO libspdk_accel.so.16.0
00:03:40.121    SYMLINK libspdk_accel.so
00:03:40.121    SO libspdk_nvme.so.15.0
00:03:40.121    CC lib/bdev/bdev.o
00:03:40.121    CC lib/bdev/bdev_rpc.o
00:03:40.121    CC lib/bdev/bdev_zone.o
00:03:40.121    CC lib/bdev/part.o
00:03:40.121    CC lib/bdev/scsi_nvme.o
00:03:40.379    SYMLINK libspdk_nvme.so
00:03:40.379    LIB libspdk_fuse_dispatcher.a
00:03:40.379    SO libspdk_fuse_dispatcher.so.1.0
00:03:40.636    SYMLINK libspdk_fuse_dispatcher.so
00:03:43.175    LIB libspdk_blob.a
00:03:43.175    SO libspdk_blob.so.11.0
00:03:43.175    SYMLINK libspdk_blob.so
00:03:43.175    CC lib/lvol/lvol.o
00:03:43.175    CC lib/blobfs/blobfs.o
00:03:43.175    CC lib/blobfs/tree.o
00:03:43.741    LIB libspdk_bdev.a
00:03:43.741    SO libspdk_bdev.so.17.0
00:03:43.741    SYMLINK libspdk_bdev.so
00:03:44.006    CC lib/nbd/nbd.o
00:03:44.006    CC lib/scsi/dev.o
00:03:44.006    CC lib/nbd/nbd_rpc.o
00:03:44.006    CC lib/nvmf/ctrlr.o
00:03:44.006    CC lib/scsi/lun.o
00:03:44.006    CC lib/nvmf/ctrlr_discovery.o
00:03:44.006    CC lib/scsi/port.o
00:03:44.006    CC lib/scsi/scsi.o
00:03:44.006    CC lib/nvmf/ctrlr_bdev.o
00:03:44.006    CC lib/scsi/scsi_bdev.o
00:03:44.006    CC lib/nvmf/subsystem.o
00:03:44.006    CC lib/scsi/scsi_pr.o
00:03:44.006    CC lib/ublk/ublk.o
00:03:44.006    CC lib/nvmf/nvmf.o
00:03:44.006    CC lib/scsi/scsi_rpc.o
00:03:44.006    CC lib/nvmf/nvmf_rpc.o
00:03:44.006    CC lib/ftl/ftl_core.o
00:03:44.006    CC lib/ublk/ublk_rpc.o
00:03:44.006    CC lib/nvmf/transport.o
00:03:44.006    CC lib/ftl/ftl_init.o
00:03:44.006    CC lib/scsi/task.o
00:03:44.006    CC lib/ftl/ftl_layout.o
00:03:44.006    CC lib/nvmf/tcp.o
00:03:44.006    CC lib/ftl/ftl_debug.o
00:03:44.006    CC lib/nvmf/stubs.o
00:03:44.006    CC lib/nvmf/mdns_server.o
00:03:44.006    CC lib/ftl/ftl_io.o
00:03:44.006    CC lib/nvmf/vfio_user.o
00:03:44.006    CC lib/ftl/ftl_sb.o
00:03:44.006    CC lib/ftl/ftl_l2p.o
00:03:44.006    CC lib/nvmf/rdma.o
00:03:44.006    CC lib/ftl/ftl_l2p_flat.o
00:03:44.006    CC lib/ftl/ftl_nv_cache.o
00:03:44.006    CC lib/nvmf/auth.o
00:03:44.006    CC lib/ftl/ftl_band.o
00:03:44.006    CC lib/ftl/ftl_band_ops.o
00:03:44.006    CC lib/ftl/ftl_writer.o
00:03:44.006    CC lib/ftl/ftl_rq.o
00:03:44.006    CC lib/ftl/ftl_reloc.o
00:03:44.006    CC lib/ftl/ftl_l2p_cache.o
00:03:44.006    CC lib/ftl/ftl_p2l.o
00:03:44.006    CC lib/ftl/ftl_p2l_log.o
00:03:44.006    CC lib/ftl/mngt/ftl_mngt.o
00:03:44.006    CC lib/ftl/mngt/ftl_mngt_bdev.o
00:03:44.006    CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:03:44.006    CC lib/ftl/mngt/ftl_mngt_startup.o
00:03:44.265    LIB libspdk_blobfs.a
00:03:44.265    SO libspdk_blobfs.so.10.0
00:03:44.265    CC lib/ftl/mngt/ftl_mngt_md.o
00:03:44.265    CC lib/ftl/mngt/ftl_mngt_misc.o
00:03:44.265    CC lib/ftl/mngt/ftl_mngt_ioch.o
00:03:44.265    CC lib/ftl/mngt/ftl_mngt_l2p.o
00:03:44.265    CC lib/ftl/mngt/ftl_mngt_band.o
00:03:44.265    CC lib/ftl/mngt/ftl_mngt_self_test.o
00:03:44.265    SYMLINK libspdk_blobfs.so
00:03:44.523    CC lib/ftl/mngt/ftl_mngt_p2l.o
00:03:44.523    CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:03:44.523    CC lib/ftl/mngt/ftl_mngt_recovery.o
00:03:44.523    CC lib/ftl/utils/ftl_conf.o
00:03:44.523    CC lib/ftl/utils/ftl_md.o
00:03:44.523    CC lib/ftl/utils/ftl_mempool.o
00:03:44.523    LIB libspdk_lvol.a
00:03:44.523    CC lib/ftl/utils/ftl_bitmap.o
00:03:44.523    CC lib/ftl/utils/ftl_property.o
00:03:44.523    SO libspdk_lvol.so.10.0
00:03:44.523    CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:03:44.523    CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:03:44.523    CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:03:44.523    CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:03:44.523    CC lib/ftl/upgrade/ftl_band_upgrade.o
00:03:44.784    SYMLINK libspdk_lvol.so
00:03:44.784    CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:03:44.784    CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:03:44.784    CC lib/ftl/upgrade/ftl_sb_v3.o
00:03:44.784    CC lib/ftl/upgrade/ftl_sb_v5.o
00:03:44.784    CC lib/ftl/nvc/ftl_nvc_dev.o
00:03:44.784    CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:03:44.784    CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:03:44.784    CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:03:44.784    CC lib/ftl/base/ftl_base_dev.o
00:03:44.784    CC lib/ftl/base/ftl_base_bdev.o
00:03:44.784    CC lib/ftl/ftl_trace.o
00:03:45.042    LIB libspdk_nbd.a
00:03:45.042    SO libspdk_nbd.so.7.0
00:03:45.042    SYMLINK libspdk_nbd.so
00:03:45.300    LIB libspdk_scsi.a
00:03:45.300    SO libspdk_scsi.so.9.0
00:03:45.300    LIB libspdk_ublk.a
00:03:45.300    SO libspdk_ublk.so.3.0
00:03:45.300    SYMLINK libspdk_scsi.so
00:03:45.300    SYMLINK libspdk_ublk.so
00:03:45.559    CC lib/vhost/vhost.o
00:03:45.559    CC lib/iscsi/conn.o
00:03:45.559    CC lib/vhost/vhost_rpc.o
00:03:45.559    CC lib/iscsi/init_grp.o
00:03:45.559    CC lib/vhost/vhost_scsi.o
00:03:45.559    CC lib/iscsi/iscsi.o
00:03:45.559    CC lib/vhost/vhost_blk.o
00:03:45.559    CC lib/iscsi/param.o
00:03:45.559    CC lib/vhost/rte_vhost_user.o
00:03:45.559    CC lib/iscsi/portal_grp.o
00:03:45.559    CC lib/iscsi/tgt_node.o
00:03:45.559    CC lib/iscsi/iscsi_subsystem.o
00:03:45.559    CC lib/iscsi/iscsi_rpc.o
00:03:45.559    CC lib/iscsi/task.o
00:03:45.817    LIB libspdk_ftl.a
00:03:46.075    SO libspdk_ftl.so.9.0
00:03:46.333    SYMLINK libspdk_ftl.so
00:03:46.898    LIB libspdk_vhost.a
00:03:46.898    SO libspdk_vhost.so.8.0
00:03:47.155    SYMLINK libspdk_vhost.so
00:03:47.413    LIB libspdk_iscsi.a
00:03:47.413    LIB libspdk_nvmf.a
00:03:47.413    SO libspdk_iscsi.so.8.0
00:03:47.671    SO libspdk_nvmf.so.20.0
00:03:47.671    SYMLINK libspdk_iscsi.so
00:03:47.930    SYMLINK libspdk_nvmf.so
00:03:48.188    CC module/env_dpdk/env_dpdk_rpc.o
00:03:48.188    CC module/vfu_device/vfu_virtio.o
00:03:48.188    CC module/vfu_device/vfu_virtio_blk.o
00:03:48.188    CC module/vfu_device/vfu_virtio_scsi.o
00:03:48.188    CC module/vfu_device/vfu_virtio_rpc.o
00:03:48.188    CC module/vfu_device/vfu_virtio_fs.o
00:03:48.188    CC module/scheduler/dynamic/scheduler_dynamic.o
00:03:48.188    CC module/sock/posix/posix.o
00:03:48.188    CC module/accel/error/accel_error.o
00:03:48.188    CC module/accel/error/accel_error_rpc.o
00:03:48.188    CC module/keyring/linux/keyring.o
00:03:48.188    CC module/scheduler/gscheduler/gscheduler.o
00:03:48.188    CC module/keyring/file/keyring.o
00:03:48.188    CC module/keyring/linux/keyring_rpc.o
00:03:48.188    CC module/scheduler/dpdk_governor/dpdk_governor.o
00:03:48.188    CC module/keyring/file/keyring_rpc.o
00:03:48.188    CC module/blob/bdev/blob_bdev.o
00:03:48.188    CC module/accel/iaa/accel_iaa.o
00:03:48.188    CC module/accel/ioat/accel_ioat.o
00:03:48.188    CC module/accel/iaa/accel_iaa_rpc.o
00:03:48.188    CC module/fsdev/aio/fsdev_aio.o
00:03:48.188    CC module/accel/ioat/accel_ioat_rpc.o
00:03:48.188    CC module/accel/dsa/accel_dsa.o
00:03:48.188    CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev.o
00:03:48.188    CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev_rpc.o
00:03:48.188    CC module/accel/dsa/accel_dsa_rpc.o
00:03:48.188    CC module/fsdev/aio/fsdev_aio_rpc.o
00:03:48.188    CC module/fsdev/aio/linux_aio_mgr.o
00:03:48.188    LIB libspdk_env_dpdk_rpc.a
00:03:48.188    SO libspdk_env_dpdk_rpc.so.6.0
00:03:48.446    SYMLINK libspdk_env_dpdk_rpc.so
00:03:48.446    LIB libspdk_scheduler_dpdk_governor.a
00:03:48.446    SO libspdk_scheduler_dpdk_governor.so.4.0
00:03:48.446    LIB libspdk_keyring_linux.a
00:03:48.446    LIB libspdk_keyring_file.a
00:03:48.446    LIB libspdk_accel_ioat.a
00:03:48.446    SO libspdk_keyring_linux.so.1.0
00:03:48.446    SO libspdk_keyring_file.so.2.0
00:03:48.446    SO libspdk_accel_ioat.so.6.0
00:03:48.446    LIB libspdk_accel_iaa.a
00:03:48.446    SYMLINK libspdk_scheduler_dpdk_governor.so
00:03:48.446    LIB libspdk_scheduler_gscheduler.a
00:03:48.446    SO libspdk_accel_iaa.so.3.0
00:03:48.446    SO libspdk_scheduler_gscheduler.so.4.0
00:03:48.446    SYMLINK libspdk_keyring_linux.so
00:03:48.446    SYMLINK libspdk_keyring_file.so
00:03:48.446    SYMLINK libspdk_accel_ioat.so
00:03:48.446    LIB libspdk_scheduler_dynamic.a
00:03:48.446    LIB libspdk_accel_error.a
00:03:48.446    SO libspdk_scheduler_dynamic.so.4.0
00:03:48.446    SYMLINK libspdk_accel_iaa.so
00:03:48.446    LIB libspdk_blob_bdev.a
00:03:48.446    SYMLINK libspdk_scheduler_gscheduler.so
00:03:48.446    SO libspdk_accel_error.so.2.0
00:03:48.446    SO libspdk_blob_bdev.so.11.0
00:03:48.704    LIB libspdk_accel_dsa.a
00:03:48.704    SYMLINK libspdk_scheduler_dynamic.so
00:03:48.704    SO libspdk_accel_dsa.so.5.0
00:03:48.704    SYMLINK libspdk_accel_error.so
00:03:48.704    SYMLINK libspdk_blob_bdev.so
00:03:48.704    SYMLINK libspdk_accel_dsa.so
00:03:48.966    CC module/bdev/malloc/bdev_malloc.o
00:03:48.967    CC module/bdev/split/vbdev_split.o
00:03:48.967    CC module/bdev/raid/bdev_raid.o
00:03:48.967    CC module/bdev/split/vbdev_split_rpc.o
00:03:48.967    CC module/bdev/delay/vbdev_delay.o
00:03:48.967    CC module/bdev/malloc/bdev_malloc_rpc.o
00:03:48.967    CC module/bdev/null/bdev_null.o
00:03:48.967    CC module/bdev/lvol/vbdev_lvol.o
00:03:48.967    CC module/bdev/iscsi/bdev_iscsi.o
00:03:48.967    CC module/bdev/raid/bdev_raid_rpc.o
00:03:48.967    CC module/bdev/delay/vbdev_delay_rpc.o
00:03:48.967    CC module/bdev/error/vbdev_error.o
00:03:48.967    CC module/bdev/lvol/vbdev_lvol_rpc.o
00:03:48.967    CC module/bdev/null/bdev_null_rpc.o
00:03:48.967    CC module/bdev/nvme/bdev_nvme.o
00:03:48.967    CC module/bdev/raid/bdev_raid_sb.o
00:03:48.967    CC module/bdev/raid/raid0.o
00:03:48.967    CC module/bdev/virtio/bdev_virtio_scsi.o
00:03:48.967    CC module/bdev/ftl/bdev_ftl.o
00:03:48.967    CC module/bdev/nvme/bdev_nvme_rpc.o
00:03:48.967    CC module/bdev/error/vbdev_error_rpc.o
00:03:48.967    CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:03:48.967    CC module/bdev/virtio/bdev_virtio_blk.o
00:03:48.967    CC module/bdev/ftl/bdev_ftl_rpc.o
00:03:48.967    CC module/bdev/zone_block/vbdev_zone_block.o
00:03:48.967    CC module/bdev/passthru/vbdev_passthru.o
00:03:48.967    CC module/bdev/raid/raid1.o
00:03:48.967    CC module/bdev/passthru/vbdev_passthru_rpc.o
00:03:48.967    CC module/bdev/nvme/nvme_rpc.o
00:03:48.967    CC module/bdev/virtio/bdev_virtio_rpc.o
00:03:48.967    CC module/bdev/raid/concat.o
00:03:48.967    CC module/bdev/nvme/bdev_mdns_client.o
00:03:48.967    CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:03:48.967    CC module/bdev/gpt/gpt.o
00:03:48.967    CC module/bdev/aio/bdev_aio.o
00:03:48.967    CC module/bdev/nvme/vbdev_opal.o
00:03:48.967    CC module/bdev/nvme/vbdev_opal_rpc.o
00:03:48.967    CC module/bdev/gpt/vbdev_gpt.o
00:03:48.967    CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:03:48.967    CC module/bdev/aio/bdev_aio_rpc.o
00:03:48.967    CC module/blobfs/bdev/blobfs_bdev.o
00:03:48.967    CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:03:48.967    CC module/bdev/crypto/vbdev_crypto.o
00:03:48.967    CC module/bdev/crypto/vbdev_crypto_rpc.o
00:03:49.226    LIB libspdk_vfu_device.a
00:03:49.226    SO libspdk_vfu_device.so.3.0
00:03:49.226    SYMLINK libspdk_vfu_device.so
00:03:49.226    LIB libspdk_blobfs_bdev.a
00:03:49.226    SO libspdk_blobfs_bdev.so.6.0
00:03:49.484    LIB libspdk_fsdev_aio.a
00:03:49.484    SYMLINK libspdk_blobfs_bdev.so
00:03:49.484    SO libspdk_fsdev_aio.so.1.0
00:03:49.484    LIB libspdk_bdev_split.a
00:03:49.484    LIB libspdk_sock_posix.a
00:03:49.484    SO libspdk_bdev_split.so.6.0
00:03:49.484    LIB libspdk_bdev_null.a
00:03:49.484    LIB libspdk_bdev_gpt.a
00:03:49.484    SO libspdk_sock_posix.so.6.0
00:03:49.484    SO libspdk_bdev_null.so.6.0
00:03:49.484    SYMLINK libspdk_fsdev_aio.so
00:03:49.484    SO libspdk_bdev_gpt.so.6.0
00:03:49.484    LIB libspdk_bdev_passthru.a
00:03:49.484    SYMLINK libspdk_bdev_split.so
00:03:49.484    SO libspdk_bdev_passthru.so.6.0
00:03:49.484    LIB libspdk_bdev_error.a
00:03:49.484    LIB libspdk_bdev_crypto.a
00:03:49.484    LIB libspdk_bdev_zone_block.a
00:03:49.484    LIB libspdk_bdev_ftl.a
00:03:49.484    SYMLINK libspdk_sock_posix.so
00:03:49.484    SYMLINK libspdk_bdev_null.so
00:03:49.484    SYMLINK libspdk_bdev_gpt.so
00:03:49.484    SO libspdk_bdev_error.so.6.0
00:03:49.484    SO libspdk_bdev_zone_block.so.6.0
00:03:49.484    SO libspdk_bdev_crypto.so.6.0
00:03:49.484    SO libspdk_bdev_ftl.so.6.0
00:03:49.484    SYMLINK libspdk_bdev_passthru.so
00:03:49.484    LIB libspdk_bdev_iscsi.a
00:03:49.743    SO libspdk_bdev_iscsi.so.6.0
00:03:49.743    SYMLINK libspdk_bdev_zone_block.so
00:03:49.743    SYMLINK libspdk_bdev_error.so
00:03:49.743    SYMLINK libspdk_bdev_crypto.so
00:03:49.743    SYMLINK libspdk_bdev_ftl.so
00:03:49.743    LIB libspdk_bdev_aio.a
00:03:49.743    LIB libspdk_bdev_malloc.a
00:03:49.743    LIB libspdk_bdev_delay.a
00:03:49.743    SO libspdk_bdev_aio.so.6.0
00:03:49.743    SO libspdk_bdev_malloc.so.6.0
00:03:49.743    SO libspdk_bdev_delay.so.6.0
00:03:49.743    SYMLINK libspdk_bdev_iscsi.so
00:03:49.743    SYMLINK libspdk_bdev_aio.so
00:03:49.743    SYMLINK libspdk_bdev_malloc.so
00:03:49.743    SYMLINK libspdk_bdev_delay.so
00:03:50.001    LIB libspdk_bdev_lvol.a
00:03:50.001    LIB libspdk_bdev_virtio.a
00:03:50.001    SO libspdk_bdev_lvol.so.6.0
00:03:50.001    SO libspdk_bdev_virtio.so.6.0
00:03:50.001    SYMLINK libspdk_bdev_lvol.so
00:03:50.001    SYMLINK libspdk_bdev_virtio.so
00:03:50.566    LIB libspdk_bdev_raid.a
00:03:50.566    SO libspdk_bdev_raid.so.6.0
00:03:50.566    SYMLINK libspdk_bdev_raid.so
00:03:52.464    LIB libspdk_accel_dpdk_cryptodev.a
00:03:52.464    SO libspdk_accel_dpdk_cryptodev.so.3.0
00:03:52.464    SYMLINK libspdk_accel_dpdk_cryptodev.so
00:03:52.464    LIB libspdk_bdev_nvme.a
00:03:52.464    SO libspdk_bdev_nvme.so.7.1
00:03:52.722    SYMLINK libspdk_bdev_nvme.so
00:03:52.980    CC module/event/subsystems/iobuf/iobuf.o
00:03:52.980    CC module/event/subsystems/iobuf/iobuf_rpc.o
00:03:52.980    CC module/event/subsystems/scheduler/scheduler.o
00:03:52.980    CC module/event/subsystems/vmd/vmd.o
00:03:52.980    CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:03:52.980    CC module/event/subsystems/fsdev/fsdev.o
00:03:52.980    CC module/event/subsystems/vmd/vmd_rpc.o
00:03:52.980    CC module/event/subsystems/sock/sock.o
00:03:52.980    CC module/event/subsystems/vhost_blk/vhost_blk.o
00:03:52.980    CC module/event/subsystems/keyring/keyring.o
00:03:53.239    LIB libspdk_event_keyring.a
00:03:53.239    LIB libspdk_event_vhost_blk.a
00:03:53.239    LIB libspdk_event_fsdev.a
00:03:53.239    LIB libspdk_event_scheduler.a
00:03:53.239    LIB libspdk_event_vfu_tgt.a
00:03:53.239    LIB libspdk_event_vmd.a
00:03:53.239    LIB libspdk_event_sock.a
00:03:53.239    SO libspdk_event_keyring.so.1.0
00:03:53.239    SO libspdk_event_vhost_blk.so.3.0
00:03:53.239    SO libspdk_event_fsdev.so.1.0
00:03:53.239    LIB libspdk_event_iobuf.a
00:03:53.239    SO libspdk_event_scheduler.so.4.0
00:03:53.239    SO libspdk_event_vfu_tgt.so.3.0
00:03:53.239    SO libspdk_event_sock.so.5.0
00:03:53.239    SO libspdk_event_vmd.so.6.0
00:03:53.239    SO libspdk_event_iobuf.so.3.0
00:03:53.239    SYMLINK libspdk_event_keyring.so
00:03:53.239    SYMLINK libspdk_event_fsdev.so
00:03:53.239    SYMLINK libspdk_event_vhost_blk.so
00:03:53.239    SYMLINK libspdk_event_vfu_tgt.so
00:03:53.239    SYMLINK libspdk_event_scheduler.so
00:03:53.239    SYMLINK libspdk_event_sock.so
00:03:53.239    SYMLINK libspdk_event_vmd.so
00:03:53.239    SYMLINK libspdk_event_iobuf.so
00:03:53.496    CC module/event/subsystems/accel/accel.o
00:03:53.755    LIB libspdk_event_accel.a
00:03:53.755    SO libspdk_event_accel.so.6.0
00:03:53.755    SYMLINK libspdk_event_accel.so
00:03:54.013    CC module/event/subsystems/bdev/bdev.o
00:03:54.013    LIB libspdk_event_bdev.a
00:03:54.013    SO libspdk_event_bdev.so.6.0
00:03:54.276    SYMLINK libspdk_event_bdev.so
00:03:54.276    CC module/event/subsystems/nbd/nbd.o
00:03:54.276    CC module/event/subsystems/scsi/scsi.o
00:03:54.276    CC module/event/subsystems/nvmf/nvmf_rpc.o
00:03:54.276    CC module/event/subsystems/nvmf/nvmf_tgt.o
00:03:54.276    CC module/event/subsystems/ublk/ublk.o
00:03:54.535    LIB libspdk_event_nbd.a
00:03:54.535    LIB libspdk_event_ublk.a
00:03:54.535    LIB libspdk_event_scsi.a
00:03:54.535    SO libspdk_event_nbd.so.6.0
00:03:54.535    SO libspdk_event_ublk.so.3.0
00:03:54.535    SO libspdk_event_scsi.so.6.0
00:03:54.535    SYMLINK libspdk_event_nbd.so
00:03:54.535    SYMLINK libspdk_event_ublk.so
00:03:54.535    SYMLINK libspdk_event_scsi.so
00:03:54.535    LIB libspdk_event_nvmf.a
00:03:54.535    SO libspdk_event_nvmf.so.6.0
00:03:54.794    SYMLINK libspdk_event_nvmf.so
00:03:54.794    CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:03:54.794    CC module/event/subsystems/iscsi/iscsi.o
00:03:54.794    LIB libspdk_event_vhost_scsi.a
00:03:54.794    LIB libspdk_event_iscsi.a
00:03:54.794    SO libspdk_event_vhost_scsi.so.3.0
00:03:54.794    SO libspdk_event_iscsi.so.6.0
00:03:55.053    SYMLINK libspdk_event_vhost_scsi.so
00:03:55.053    SYMLINK libspdk_event_iscsi.so
00:03:55.053    SO libspdk.so.6.0
00:03:55.053    SYMLINK libspdk.so
00:03:55.318    CC app/trace_record/trace_record.o
00:03:55.318    CXX app/trace/trace.o
00:03:55.318    CC app/spdk_lspci/spdk_lspci.o
00:03:55.318    CC app/spdk_nvme_perf/perf.o
00:03:55.318    CC app/spdk_nvme_discover/discovery_aer.o
00:03:55.318    CC app/spdk_nvme_identify/identify.o
00:03:55.318    TEST_HEADER include/spdk/accel.h
00:03:55.318    CC test/rpc_client/rpc_client_test.o
00:03:55.318    TEST_HEADER include/spdk/accel_module.h
00:03:55.318    CC app/spdk_top/spdk_top.o
00:03:55.318    TEST_HEADER include/spdk/assert.h
00:03:55.318    TEST_HEADER include/spdk/barrier.h
00:03:55.318    TEST_HEADER include/spdk/base64.h
00:03:55.318    TEST_HEADER include/spdk/bdev.h
00:03:55.318    TEST_HEADER include/spdk/bdev_module.h
00:03:55.318    TEST_HEADER include/spdk/bdev_zone.h
00:03:55.318    TEST_HEADER include/spdk/bit_array.h
00:03:55.318    TEST_HEADER include/spdk/bit_pool.h
00:03:55.318    TEST_HEADER include/spdk/blob_bdev.h
00:03:55.318    TEST_HEADER include/spdk/blobfs_bdev.h
00:03:55.318    TEST_HEADER include/spdk/blobfs.h
00:03:55.318    TEST_HEADER include/spdk/blob.h
00:03:55.318    TEST_HEADER include/spdk/conf.h
00:03:55.318    TEST_HEADER include/spdk/config.h
00:03:55.318    TEST_HEADER include/spdk/cpuset.h
00:03:55.318    TEST_HEADER include/spdk/crc16.h
00:03:55.318    TEST_HEADER include/spdk/crc32.h
00:03:55.318    TEST_HEADER include/spdk/crc64.h
00:03:55.318    TEST_HEADER include/spdk/dma.h
00:03:55.318    TEST_HEADER include/spdk/dif.h
00:03:55.318    TEST_HEADER include/spdk/endian.h
00:03:55.318    TEST_HEADER include/spdk/env_dpdk.h
00:03:55.318    TEST_HEADER include/spdk/env.h
00:03:55.318    TEST_HEADER include/spdk/event.h
00:03:55.318    TEST_HEADER include/spdk/fd_group.h
00:03:55.318    TEST_HEADER include/spdk/file.h
00:03:55.318    TEST_HEADER include/spdk/fd.h
00:03:55.318    TEST_HEADER include/spdk/fsdev.h
00:03:55.318    TEST_HEADER include/spdk/fsdev_module.h
00:03:55.318    TEST_HEADER include/spdk/ftl.h
00:03:55.318    TEST_HEADER include/spdk/fuse_dispatcher.h
00:03:55.318    TEST_HEADER include/spdk/gpt_spec.h
00:03:55.318    TEST_HEADER include/spdk/hexlify.h
00:03:55.318    TEST_HEADER include/spdk/histogram_data.h
00:03:55.318    TEST_HEADER include/spdk/idxd.h
00:03:55.318    TEST_HEADER include/spdk/idxd_spec.h
00:03:55.318    TEST_HEADER include/spdk/init.h
00:03:55.318    TEST_HEADER include/spdk/ioat.h
00:03:55.318    TEST_HEADER include/spdk/ioat_spec.h
00:03:55.318    TEST_HEADER include/spdk/iscsi_spec.h
00:03:55.318    TEST_HEADER include/spdk/json.h
00:03:55.318    TEST_HEADER include/spdk/jsonrpc.h
00:03:55.318    TEST_HEADER include/spdk/keyring.h
00:03:55.318    TEST_HEADER include/spdk/keyring_module.h
00:03:55.318    TEST_HEADER include/spdk/likely.h
00:03:55.318    TEST_HEADER include/spdk/log.h
00:03:55.318    TEST_HEADER include/spdk/lvol.h
00:03:55.318    TEST_HEADER include/spdk/md5.h
00:03:55.318    TEST_HEADER include/spdk/memory.h
00:03:55.318    TEST_HEADER include/spdk/mmio.h
00:03:55.318    TEST_HEADER include/spdk/nbd.h
00:03:55.318    TEST_HEADER include/spdk/net.h
00:03:55.318    TEST_HEADER include/spdk/notify.h
00:03:55.318    TEST_HEADER include/spdk/nvme.h
00:03:55.318    TEST_HEADER include/spdk/nvme_intel.h
00:03:55.318    TEST_HEADER include/spdk/nvme_ocssd.h
00:03:55.318    TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:03:55.318    TEST_HEADER include/spdk/nvme_spec.h
00:03:55.318    TEST_HEADER include/spdk/nvme_zns.h
00:03:55.318    TEST_HEADER include/spdk/nvmf_cmd.h
00:03:55.318    TEST_HEADER include/spdk/nvmf_fc_spec.h
00:03:55.318    TEST_HEADER include/spdk/nvmf.h
00:03:55.318    TEST_HEADER include/spdk/nvmf_spec.h
00:03:55.318    TEST_HEADER include/spdk/nvmf_transport.h
00:03:55.318    TEST_HEADER include/spdk/opal.h
00:03:55.318    TEST_HEADER include/spdk/opal_spec.h
00:03:55.318    TEST_HEADER include/spdk/pci_ids.h
00:03:55.318    TEST_HEADER include/spdk/pipe.h
00:03:55.318    TEST_HEADER include/spdk/queue.h
00:03:55.318    TEST_HEADER include/spdk/reduce.h
00:03:55.318    TEST_HEADER include/spdk/rpc.h
00:03:55.318    TEST_HEADER include/spdk/scheduler.h
00:03:55.318    TEST_HEADER include/spdk/scsi.h
00:03:55.318    TEST_HEADER include/spdk/scsi_spec.h
00:03:55.318    TEST_HEADER include/spdk/sock.h
00:03:55.318    TEST_HEADER include/spdk/string.h
00:03:55.318    TEST_HEADER include/spdk/stdinc.h
00:03:55.318    TEST_HEADER include/spdk/thread.h
00:03:55.318    TEST_HEADER include/spdk/trace_parser.h
00:03:55.318    TEST_HEADER include/spdk/trace.h
00:03:55.318    TEST_HEADER include/spdk/tree.h
00:03:55.318    TEST_HEADER include/spdk/ublk.h
00:03:55.318    TEST_HEADER include/spdk/util.h
00:03:55.318    TEST_HEADER include/spdk/uuid.h
00:03:55.318    TEST_HEADER include/spdk/vfio_user_pci.h
00:03:55.318    TEST_HEADER include/spdk/version.h
00:03:55.318    TEST_HEADER include/spdk/vfio_user_spec.h
00:03:55.318    TEST_HEADER include/spdk/vhost.h
00:03:55.318    TEST_HEADER include/spdk/vmd.h
00:03:55.318    TEST_HEADER include/spdk/xor.h
00:03:55.318    TEST_HEADER include/spdk/zipf.h
00:03:55.318    CXX test/cpp_headers/accel.o
00:03:55.318    CXX test/cpp_headers/accel_module.o
00:03:55.318    CXX test/cpp_headers/assert.o
00:03:55.318    CXX test/cpp_headers/barrier.o
00:03:55.318    CXX test/cpp_headers/base64.o
00:03:55.318    CXX test/cpp_headers/bdev.o
00:03:55.318    CXX test/cpp_headers/bdev_module.o
00:03:55.318    CXX test/cpp_headers/bdev_zone.o
00:03:55.318    CXX test/cpp_headers/bit_array.o
00:03:55.318    CC app/nvmf_tgt/nvmf_main.o
00:03:55.318    CXX test/cpp_headers/bit_pool.o
00:03:55.318    CXX test/cpp_headers/blob_bdev.o
00:03:55.318    CXX test/cpp_headers/blobfs_bdev.o
00:03:55.318    CXX test/cpp_headers/blobfs.o
00:03:55.318    CXX test/cpp_headers/blob.o
00:03:55.318    CXX test/cpp_headers/conf.o
00:03:55.318    CC examples/interrupt_tgt/interrupt_tgt.o
00:03:55.318    CXX test/cpp_headers/config.o
00:03:55.318    CXX test/cpp_headers/cpuset.o
00:03:55.318    CXX test/cpp_headers/crc16.o
00:03:55.318    CC app/spdk_dd/spdk_dd.o
00:03:55.318    CC app/iscsi_tgt/iscsi_tgt.o
00:03:55.318    CXX test/cpp_headers/crc32.o
00:03:55.318    CC test/env/pci/pci_ut.o
00:03:55.318    CC app/spdk_tgt/spdk_tgt.o
00:03:55.318    CC test/env/vtophys/vtophys.o
00:03:55.318    CC test/app/jsoncat/jsoncat.o
00:03:55.318    CC test/thread/poller_perf/poller_perf.o
00:03:55.318    CC examples/ioat/perf/perf.o
00:03:55.318    CC test/env/memory/memory_ut.o
00:03:55.318    CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:03:55.318    CC examples/util/zipf/zipf.o
00:03:55.318    CC examples/ioat/verify/verify.o
00:03:55.318    CC app/fio/nvme/fio_plugin.o
00:03:55.318    CC test/app/stub/stub.o
00:03:55.319    CC test/app/histogram_perf/histogram_perf.o
00:03:55.580    CC test/dma/test_dma/test_dma.o
00:03:55.580    CC app/fio/bdev/fio_plugin.o
00:03:55.580    CC test/app/bdev_svc/bdev_svc.o
00:03:55.580    CC test/env/mem_callbacks/mem_callbacks.o
00:03:55.580    LINK spdk_lspci
00:03:55.580    CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:03:55.842    LINK rpc_client_test
00:03:55.842    LINK spdk_nvme_discover
00:03:55.842    LINK jsoncat
00:03:55.842    LINK vtophys
00:03:55.842    LINK poller_perf
00:03:55.842    LINK nvmf_tgt
00:03:55.842    LINK interrupt_tgt
00:03:55.842    CXX test/cpp_headers/crc64.o
00:03:55.842    LINK histogram_perf
00:03:55.842    LINK zipf
00:03:55.842    CXX test/cpp_headers/dif.o
00:03:55.842    CXX test/cpp_headers/dma.o
00:03:55.842    CXX test/cpp_headers/endian.o
00:03:55.842    CXX test/cpp_headers/env_dpdk.o
00:03:55.843    CXX test/cpp_headers/env.o
00:03:55.843    CXX test/cpp_headers/event.o
00:03:55.843    CXX test/cpp_headers/fd_group.o
00:03:55.843    CXX test/cpp_headers/fd.o
00:03:55.843    CXX test/cpp_headers/file.o
00:03:55.843    LINK env_dpdk_post_init
00:03:55.843    CXX test/cpp_headers/fsdev.o
00:03:55.843    LINK iscsi_tgt
00:03:55.843    CXX test/cpp_headers/fsdev_module.o
00:03:55.843    LINK stub
00:03:55.843    CXX test/cpp_headers/ftl.o
00:03:55.843    CXX test/cpp_headers/fuse_dispatcher.o
00:03:55.843    CXX test/cpp_headers/gpt_spec.o
00:03:55.843    LINK spdk_trace_record
00:03:55.843    CXX test/cpp_headers/hexlify.o
00:03:55.843    LINK bdev_svc
00:03:55.843    LINK spdk_tgt
00:03:56.111    LINK ioat_perf
00:03:56.111    CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:03:56.111    LINK verify
00:03:56.111    CXX test/cpp_headers/histogram_data.o
00:03:56.111    CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:03:56.111    CXX test/cpp_headers/idxd.o
00:03:56.111    CXX test/cpp_headers/idxd_spec.o
00:03:56.111    CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:03:56.111    CXX test/cpp_headers/init.o
00:03:56.111    CXX test/cpp_headers/ioat.o
00:03:56.374    CXX test/cpp_headers/ioat_spec.o
00:03:56.374    LINK spdk_trace
00:03:56.374    CXX test/cpp_headers/iscsi_spec.o
00:03:56.374    CXX test/cpp_headers/json.o
00:03:56.374    CXX test/cpp_headers/jsonrpc.o
00:03:56.374    CXX test/cpp_headers/keyring.o
00:03:56.374    CXX test/cpp_headers/keyring_module.o
00:03:56.374    CXX test/cpp_headers/likely.o
00:03:56.374    CXX test/cpp_headers/log.o
00:03:56.374    LINK spdk_dd
00:03:56.374    CXX test/cpp_headers/lvol.o
00:03:56.374    CXX test/cpp_headers/md5.o
00:03:56.374    CXX test/cpp_headers/memory.o
00:03:56.374    CXX test/cpp_headers/mmio.o
00:03:56.374    CXX test/cpp_headers/nbd.o
00:03:56.374    CXX test/cpp_headers/net.o
00:03:56.374    CXX test/cpp_headers/notify.o
00:03:56.374    CXX test/cpp_headers/nvme.o
00:03:56.374    CXX test/cpp_headers/nvme_intel.o
00:03:56.374    CXX test/cpp_headers/nvme_ocssd.o
00:03:56.374    CXX test/cpp_headers/nvme_ocssd_spec.o
00:03:56.374    CXX test/cpp_headers/nvme_spec.o
00:03:56.374    CXX test/cpp_headers/nvme_zns.o
00:03:56.374    CXX test/cpp_headers/nvmf_cmd.o
00:03:56.374    CXX test/cpp_headers/nvmf_fc_spec.o
00:03:56.374    CXX test/cpp_headers/nvmf.o
00:03:56.374    LINK pci_ut
00:03:56.635    CXX test/cpp_headers/nvmf_spec.o
00:03:56.635    CXX test/cpp_headers/nvmf_transport.o
00:03:56.635    CXX test/cpp_headers/opal.o
00:03:56.635    CXX test/cpp_headers/opal_spec.o
00:03:56.635    CC test/event/event_perf/event_perf.o
00:03:56.635    CC examples/sock/hello_world/hello_sock.o
00:03:56.635    CC examples/vmd/lsvmd/lsvmd.o
00:03:56.635    CC examples/idxd/perf/perf.o
00:03:56.635    CC test/event/reactor/reactor.o
00:03:56.635    CXX test/cpp_headers/pci_ids.o
00:03:56.635    CC examples/thread/thread/thread_ex.o
00:03:56.635    LINK test_dma
00:03:56.635    LINK nvme_fuzz
00:03:56.635    CXX test/cpp_headers/pipe.o
00:03:56.635    CXX test/cpp_headers/queue.o
00:03:56.635    LINK spdk_bdev
00:03:56.897    CC test/event/reactor_perf/reactor_perf.o
00:03:56.897    CXX test/cpp_headers/reduce.o
00:03:56.897    CXX test/cpp_headers/rpc.o
00:03:56.897    CXX test/cpp_headers/scheduler.o
00:03:56.897    CXX test/cpp_headers/scsi.o
00:03:56.897    CXX test/cpp_headers/scsi_spec.o
00:03:56.897    CXX test/cpp_headers/sock.o
00:03:56.897    CXX test/cpp_headers/stdinc.o
00:03:56.897    CXX test/cpp_headers/string.o
00:03:56.897    CXX test/cpp_headers/thread.o
00:03:56.897    CXX test/cpp_headers/trace.o
00:03:56.897    CC test/event/app_repeat/app_repeat.o
00:03:56.897    CXX test/cpp_headers/trace_parser.o
00:03:56.897    LINK mem_callbacks
00:03:56.897    CXX test/cpp_headers/tree.o
00:03:56.897    CXX test/cpp_headers/ublk.o
00:03:56.897    CXX test/cpp_headers/util.o
00:03:56.897    CC examples/vmd/led/led.o
00:03:56.897    CXX test/cpp_headers/uuid.o
00:03:56.897    CC test/event/scheduler/scheduler.o
00:03:56.897    CC app/vhost/vhost.o
00:03:56.897    CXX test/cpp_headers/version.o
00:03:56.897    CXX test/cpp_headers/vfio_user_pci.o
00:03:56.897    CXX test/cpp_headers/vfio_user_spec.o
00:03:56.897    LINK lsvmd
00:03:56.897    LINK spdk_nvme
00:03:56.897    CXX test/cpp_headers/vhost.o
00:03:56.897    CXX test/cpp_headers/vmd.o
00:03:56.897    LINK event_perf
00:03:56.897    CXX test/cpp_headers/xor.o
00:03:56.897    CXX test/cpp_headers/zipf.o
00:03:56.897    LINK reactor
00:03:57.158    LINK reactor_perf
00:03:57.158    LINK app_repeat
00:03:57.158    LINK vhost_fuzz
00:03:57.158    LINK led
00:03:57.158    LINK hello_sock
00:03:57.158    LINK thread
00:03:57.416    LINK vhost
00:03:57.416    LINK spdk_nvme_perf
00:03:57.416    CC test/nvme/e2edp/nvme_dp.o
00:03:57.416    LINK idxd_perf
00:03:57.416    CC test/nvme/boot_partition/boot_partition.o
00:03:57.416    CC test/nvme/compliance/nvme_compliance.o
00:03:57.416    CC test/nvme/overhead/overhead.o
00:03:57.416    CC test/nvme/connect_stress/connect_stress.o
00:03:57.416    CC test/nvme/reserve/reserve.o
00:03:57.416    CC test/nvme/aer/aer.o
00:03:57.416    LINK scheduler
00:03:57.416    CC test/nvme/reset/reset.o
00:03:57.416    CC test/nvme/fused_ordering/fused_ordering.o
00:03:57.416    CC test/nvme/simple_copy/simple_copy.o
00:03:57.416    CC test/nvme/startup/startup.o
00:03:57.416    CC test/nvme/cuse/cuse.o
00:03:57.416    CC test/nvme/doorbell_aers/doorbell_aers.o
00:03:57.416    CC test/nvme/fdp/fdp.o
00:03:57.416    CC test/nvme/sgl/sgl.o
00:03:57.416    CC test/nvme/err_injection/err_injection.o
00:03:57.416    LINK spdk_nvme_identify
00:03:57.416    CC test/blobfs/mkfs/mkfs.o
00:03:57.416    LINK spdk_top
00:03:57.416    CC test/accel/dif/dif.o
00:03:57.416    CC test/lvol/esnap/esnap.o
00:03:57.675    LINK startup
00:03:57.675    CC examples/nvme/reconnect/reconnect.o
00:03:57.675    CC examples/nvme/arbitration/arbitration.o
00:03:57.675    CC examples/nvme/nvme_manage/nvme_manage.o
00:03:57.675    CC examples/nvme/abort/abort.o
00:03:57.675    CC examples/nvme/hello_world/hello_world.o
00:03:57.675    CC examples/nvme/cmb_copy/cmb_copy.o
00:03:57.675    CC examples/nvme/hotplug/hotplug.o
00:03:57.675    CC examples/nvme/pmr_persistence/pmr_persistence.o
00:03:57.675    LINK doorbell_aers
00:03:57.675    LINK fused_ordering
00:03:57.675    LINK boot_partition
00:03:57.675    LINK connect_stress
00:03:57.675    LINK simple_copy
00:03:57.933    CC examples/accel/perf/accel_perf.o
00:03:57.933    LINK err_injection
00:03:57.933    LINK reserve
00:03:57.933    CC examples/fsdev/hello_world/hello_fsdev.o
00:03:57.933    LINK sgl
00:03:57.933    CC examples/blob/hello_world/hello_blob.o
00:03:57.933    CC examples/blob/cli/blobcli.o
00:03:57.933    LINK mkfs
00:03:57.933    LINK reset
00:03:57.933    LINK overhead
00:03:57.933    LINK cmb_copy
00:03:57.933    LINK pmr_persistence
00:03:57.933    LINK nvme_dp
00:03:57.933    LINK memory_ut
00:03:57.933    LINK aer
00:03:57.933    LINK nvme_compliance
00:03:58.192    LINK fdp
00:03:58.192    LINK hotplug
00:03:58.192    LINK hello_world
00:03:58.192    LINK arbitration
00:03:58.192    LINK reconnect
00:03:58.451    LINK hello_blob
00:03:58.451    LINK hello_fsdev
00:03:58.451    LINK abort
00:03:58.451    LINK nvme_manage
00:03:58.709    LINK blobcli
00:03:58.709    LINK accel_perf
00:03:58.709    LINK dif
00:03:58.967    CC examples/bdev/hello_world/hello_bdev.o
00:03:58.967    CC examples/bdev/bdevperf/bdevperf.o
00:03:59.225    CC test/bdev/bdevio/bdevio.o
00:03:59.225    LINK iscsi_fuzz
00:03:59.483    LINK hello_bdev
00:03:59.483    LINK cuse
00:03:59.483    LINK bdevio
00:04:00.049    LINK bdevperf
00:04:00.614    CC examples/nvmf/nvmf/nvmf.o
00:04:00.872    LINK nvmf
00:04:05.061    LINK esnap
00:04:05.061  
00:04:05.061  real	1m59.109s
00:04:05.061  user	26m15.992s
00:04:05.061  sys	3m26.771s
00:04:05.061   10:00:59 make -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:05.061   10:00:59 make -- common/autotest_common.sh@10 -- $ set +x
00:04:05.061  ************************************
00:04:05.061  END TEST make
00:04:05.061  ************************************
00:04:05.061   10:00:59  -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:04:05.061   10:00:59  -- pm/common@29 -- $ signal_monitor_resources TERM
00:04:05.061   10:00:59  -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:04:05.061   10:00:59  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:05.061   10:00:59  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:04:05.061   10:00:59  -- pm/common@44 -- $ pid=1653329
00:04:05.061   10:00:59  -- pm/common@50 -- $ kill -TERM 1653329
00:04:05.061   10:00:59  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:05.061   10:00:59  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:04:05.061   10:00:59  -- pm/common@44 -- $ pid=1653331
00:04:05.061   10:00:59  -- pm/common@50 -- $ kill -TERM 1653331
00:04:05.061   10:00:59  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:05.061   10:00:59  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:04:05.061   10:00:59  -- pm/common@44 -- $ pid=1653333
00:04:05.061   10:00:59  -- pm/common@50 -- $ kill -TERM 1653333
00:04:05.061   10:00:59  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:05.061   10:00:59  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:04:05.061   10:00:59  -- pm/common@44 -- $ pid=1653362
00:04:05.061   10:00:59  -- pm/common@50 -- $ sudo -E kill -TERM 1653362
00:04:05.061   10:00:59  -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:04:05.061   10:00:59  -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/vfio-user-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf
00:04:05.061    10:00:59  -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:05.061     10:00:59  -- common/autotest_common.sh@1693 -- # lcov --version
00:04:05.061     10:00:59  -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:05.061    10:00:59  -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:05.061    10:00:59  -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:05.061    10:00:59  -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:05.061    10:00:59  -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:05.061    10:00:59  -- scripts/common.sh@336 -- # IFS=.-:
00:04:05.061    10:00:59  -- scripts/common.sh@336 -- # read -ra ver1
00:04:05.061    10:00:59  -- scripts/common.sh@337 -- # IFS=.-:
00:04:05.061    10:00:59  -- scripts/common.sh@337 -- # read -ra ver2
00:04:05.061    10:00:59  -- scripts/common.sh@338 -- # local 'op=<'
00:04:05.061    10:00:59  -- scripts/common.sh@340 -- # ver1_l=2
00:04:05.061    10:00:59  -- scripts/common.sh@341 -- # ver2_l=1
00:04:05.061    10:00:59  -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:05.061    10:00:59  -- scripts/common.sh@344 -- # case "$op" in
00:04:05.061    10:00:59  -- scripts/common.sh@345 -- # : 1
00:04:05.061    10:00:59  -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:05.061    10:00:59  -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:05.061     10:00:59  -- scripts/common.sh@365 -- # decimal 1
00:04:05.061     10:00:59  -- scripts/common.sh@353 -- # local d=1
00:04:05.061     10:00:59  -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:05.061     10:00:59  -- scripts/common.sh@355 -- # echo 1
00:04:05.061    10:00:59  -- scripts/common.sh@365 -- # ver1[v]=1
00:04:05.061     10:00:59  -- scripts/common.sh@366 -- # decimal 2
00:04:05.061     10:00:59  -- scripts/common.sh@353 -- # local d=2
00:04:05.061     10:00:59  -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:05.061     10:00:59  -- scripts/common.sh@355 -- # echo 2
00:04:05.061    10:00:59  -- scripts/common.sh@366 -- # ver2[v]=2
00:04:05.061    10:00:59  -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:05.061    10:00:59  -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:05.061    10:00:59  -- scripts/common.sh@368 -- # return 0
00:04:05.061    10:00:59  -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:05.061    10:00:59  -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:05.061  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:05.061  		--rc genhtml_branch_coverage=1
00:04:05.061  		--rc genhtml_function_coverage=1
00:04:05.061  		--rc genhtml_legend=1
00:04:05.061  		--rc geninfo_all_blocks=1
00:04:05.061  		--rc geninfo_unexecuted_blocks=1
00:04:05.061  		
00:04:05.061  		'
00:04:05.061    10:00:59  -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:05.061  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:05.061  		--rc genhtml_branch_coverage=1
00:04:05.061  		--rc genhtml_function_coverage=1
00:04:05.061  		--rc genhtml_legend=1
00:04:05.061  		--rc geninfo_all_blocks=1
00:04:05.061  		--rc geninfo_unexecuted_blocks=1
00:04:05.061  		
00:04:05.061  		'
00:04:05.061    10:00:59  -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:05.062  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:05.062  		--rc genhtml_branch_coverage=1
00:04:05.062  		--rc genhtml_function_coverage=1
00:04:05.062  		--rc genhtml_legend=1
00:04:05.062  		--rc geninfo_all_blocks=1
00:04:05.062  		--rc geninfo_unexecuted_blocks=1
00:04:05.062  		
00:04:05.062  		'
00:04:05.062    10:00:59  -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:04:05.062  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:05.062  		--rc genhtml_branch_coverage=1
00:04:05.062  		--rc genhtml_function_coverage=1
00:04:05.062  		--rc genhtml_legend=1
00:04:05.062  		--rc geninfo_all_blocks=1
00:04:05.062  		--rc geninfo_unexecuted_blocks=1
00:04:05.062  		
00:04:05.062  		'
00:04:05.062   10:00:59  -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh
00:04:05.062     10:00:59  -- nvmf/common.sh@7 -- # uname -s
00:04:05.062    10:00:59  -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:05.062    10:00:59  -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:05.062    10:00:59  -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:05.062    10:00:59  -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:05.062    10:00:59  -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:05.062    10:00:59  -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:05.062    10:00:59  -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:05.062    10:00:59  -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:05.062    10:00:59  -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:05.062     10:00:59  -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:05.062    10:00:59  -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92072e00-b2cb-e211-b423-001e67898f4e
00:04:05.062    10:00:59  -- nvmf/common.sh@18 -- # NVME_HOSTID=92072e00-b2cb-e211-b423-001e67898f4e
00:04:05.062    10:00:59  -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:05.062    10:00:59  -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:05.062    10:00:59  -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:04:05.062    10:00:59  -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:05.062    10:00:59  -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/common.sh
00:04:05.062     10:00:59  -- scripts/common.sh@15 -- # shopt -s extglob
00:04:05.062     10:00:59  -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:05.062     10:00:59  -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:05.062     10:00:59  -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:05.062      10:00:59  -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:05.062      10:00:59  -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:05.062      10:00:59  -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:05.062      10:00:59  -- paths/export.sh@5 -- # export PATH
00:04:05.062      10:00:59  -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:05.062    10:00:59  -- nvmf/common.sh@51 -- # : 0
00:04:05.062    10:00:59  -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:04:05.062    10:00:59  -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:04:05.062    10:00:59  -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:04:05.062    10:00:59  -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:05.062    10:00:59  -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:05.062    10:00:59  -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:04:05.062  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:04:05.062    10:00:59  -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:04:05.062    10:00:59  -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:04:05.062    10:00:59  -- nvmf/common.sh@55 -- # have_pci_nics=0
00:04:05.062   10:00:59  -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:04:05.062    10:00:59  -- spdk/autotest.sh@32 -- # uname -s
00:04:05.062   10:00:59  -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:04:05.062   10:00:59  -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:04:05.062   10:00:59  -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/coredumps
00:04:05.062   10:00:59  -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:04:05.062   10:00:59  -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/coredumps
00:04:05.062   10:00:59  -- spdk/autotest.sh@44 -- # modprobe nbd
00:04:05.062    10:00:59  -- spdk/autotest.sh@46 -- # type -P udevadm
00:04:05.062   10:00:59  -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:04:05.062   10:00:59  -- spdk/autotest.sh@48 -- # udevadm_pid=1722325
00:04:05.062   10:00:59  -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:04:05.062   10:00:59  -- spdk/autotest.sh@53 -- # start_monitor_resources
00:04:05.062   10:00:59  -- pm/common@17 -- # local monitor
00:04:05.062   10:00:59  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:04:05.062   10:00:59  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:04:05.062   10:00:59  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:04:05.062    10:00:59  -- pm/common@21 -- # date +%s
00:04:05.062   10:00:59  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:04:05.062    10:00:59  -- pm/common@21 -- # date +%s
00:04:05.062   10:00:59  -- pm/common@25 -- # sleep 1
00:04:05.062    10:00:59  -- pm/common@21 -- # date +%s
00:04:05.062    10:00:59  -- pm/common@21 -- # date +%s
00:04:05.062   10:00:59  -- pm/common@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732093259
00:04:05.062   10:00:59  -- pm/common@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732093259
00:04:05.062   10:00:59  -- pm/common@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732093259
00:04:05.062   10:00:59  -- pm/common@21 -- # sudo -E /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732093259
00:04:05.062  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732093259_collect-cpu-load.pm.log
00:04:05.062  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732093259_collect-vmstat.pm.log
00:04:05.062  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732093259_collect-cpu-temp.pm.log
00:04:05.062  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732093259_collect-bmc-pm.bmc.pm.log
00:04:06.002   10:01:00  -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:04:06.002   10:01:00  -- spdk/autotest.sh@57 -- # timing_enter autotest
00:04:06.002   10:01:00  -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:06.002   10:01:00  -- common/autotest_common.sh@10 -- # set +x
00:04:06.002   10:01:00  -- spdk/autotest.sh@59 -- # create_test_list
00:04:06.002   10:01:00  -- common/autotest_common.sh@752 -- # xtrace_disable
00:04:06.002   10:01:00  -- common/autotest_common.sh@10 -- # set +x
00:04:06.002     10:01:00  -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/autotest.sh
00:04:06.002    10:01:00  -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:04:06.002   10:01:00  -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:04:06.002   10:01:00  -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output
00:04:06.002   10:01:00  -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:04:06.002   10:01:00  -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:04:06.002    10:01:00  -- common/autotest_common.sh@1457 -- # uname
00:04:06.002   10:01:00  -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:04:06.002   10:01:00  -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:04:06.002    10:01:00  -- common/autotest_common.sh@1477 -- # uname
00:04:06.002   10:01:00  -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:04:06.002   10:01:00  -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:04:06.002   10:01:00  -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:04:06.002  lcov: LCOV version 1.15
00:04:06.002   10:01:01  -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_base.info
00:04:24.186  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:04:24.186  geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:04:46.197   10:01:38  -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:04:46.198   10:01:38  -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:46.198   10:01:38  -- common/autotest_common.sh@10 -- # set +x
00:04:46.198   10:01:38  -- spdk/autotest.sh@78 -- # rm -f
00:04:46.198   10:01:38  -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh reset
00:04:46.198  0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:04:46.198  0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:04:46.198  0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:04:46.198  0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:04:46.198  0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:04:46.198  0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:04:46.198  0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:04:46.198  0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:04:46.198  0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:04:46.198  0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:04:46.198  0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:04:46.198  0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:04:46.198  0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:04:46.198  0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:04:46.198  0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:04:46.198  0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:04:46.198  0000:85:00.0 (8086 0a54): Already using the nvme driver
00:04:46.198   10:01:39  -- spdk/autotest.sh@83 -- # get_zoned_devs
00:04:46.198   10:01:39  -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:04:46.198   10:01:39  -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:04:46.198   10:01:39  -- common/autotest_common.sh@1658 -- # local nvme bdf
00:04:46.198   10:01:39  -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:04:46.198   10:01:39  -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1
00:04:46.198   10:01:39  -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:04:46.198   10:01:39  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:46.198   10:01:39  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:04:46.198   10:01:39  -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:04:46.198   10:01:39  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:04:46.198   10:01:39  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:04:46.198   10:01:39  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:04:46.198   10:01:39  -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:04:46.198   10:01:39  -- scripts/common.sh@390 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:04:46.198  No valid GPT data, bailing
00:04:46.198    10:01:39  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:46.198   10:01:39  -- scripts/common.sh@394 -- # pt=
00:04:46.198   10:01:39  -- scripts/common.sh@395 -- # return 1
00:04:46.198   10:01:39  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:04:46.198  1+0 records in
00:04:46.198  1+0 records out
00:04:46.198  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00179448 s, 584 MB/s
00:04:46.198   10:01:39  -- spdk/autotest.sh@105 -- # sync
00:04:46.198   10:01:39  -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:04:46.198   10:01:39  -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:04:46.198    10:01:39  -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:04:46.763    10:01:41  -- spdk/autotest.sh@111 -- # uname -s
00:04:46.763   10:01:41  -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:04:46.763   10:01:41  -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:04:46.763   10:01:41  -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh status
00:04:48.139  Hugepages
00:04:48.139  node     hugesize     free /  total
00:04:48.139  node0   1048576kB        0 /      0
00:04:48.139  node0      2048kB        0 /      0
00:04:48.139  node1   1048576kB        0 /      0
00:04:48.139  node1      2048kB        0 /      0
00:04:48.139  
00:04:48.139  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:04:48.140  I/OAT                     0000:00:04.0    8086   0e20   0       ioatdma          -          -
00:04:48.140  I/OAT                     0000:00:04.1    8086   0e21   0       ioatdma          -          -
00:04:48.140  I/OAT                     0000:00:04.2    8086   0e22   0       ioatdma          -          -
00:04:48.140  I/OAT                     0000:00:04.3    8086   0e23   0       ioatdma          -          -
00:04:48.140  I/OAT                     0000:00:04.4    8086   0e24   0       ioatdma          -          -
00:04:48.140  I/OAT                     0000:00:04.5    8086   0e25   0       ioatdma          -          -
00:04:48.140  I/OAT                     0000:00:04.6    8086   0e26   0       ioatdma          -          -
00:04:48.140  I/OAT                     0000:00:04.7    8086   0e27   0       ioatdma          -          -
00:04:48.140  I/OAT                     0000:80:04.0    8086   0e20   1       ioatdma          -          -
00:04:48.140  I/OAT                     0000:80:04.1    8086   0e21   1       ioatdma          -          -
00:04:48.140  I/OAT                     0000:80:04.2    8086   0e22   1       ioatdma          -          -
00:04:48.140  I/OAT                     0000:80:04.3    8086   0e23   1       ioatdma          -          -
00:04:48.140  I/OAT                     0000:80:04.4    8086   0e24   1       ioatdma          -          -
00:04:48.140  I/OAT                     0000:80:04.5    8086   0e25   1       ioatdma          -          -
00:04:48.140  I/OAT                     0000:80:04.6    8086   0e26   1       ioatdma          -          -
00:04:48.140  I/OAT                     0000:80:04.7    8086   0e27   1       ioatdma          -          -
00:04:48.140  NVMe                      0000:85:00.0    8086   0a54   1       nvme             nvme0      nvme0n1
00:04:48.140    10:01:43  -- spdk/autotest.sh@117 -- # uname -s
00:04:48.140   10:01:43  -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:04:48.140   10:01:43  -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:04:48.140   10:01:43  -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh
00:04:49.519  0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:49.519  0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:49.519  0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:49.519  0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:49.519  0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:49.519  0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:49.519  0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:49.519  0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:49.519  0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:49.519  0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:49.519  0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:49.519  0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:49.519  0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:49.519  0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:49.519  0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:49.519  0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:50.457  0000:85:00.0 (8086 0a54): nvme -> vfio-pci
00:04:50.715   10:01:45  -- common/autotest_common.sh@1517 -- # sleep 1
00:04:51.653   10:01:46  -- common/autotest_common.sh@1518 -- # bdfs=()
00:04:51.653   10:01:46  -- common/autotest_common.sh@1518 -- # local bdfs
00:04:51.653   10:01:46  -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:04:51.653    10:01:46  -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:04:51.653    10:01:46  -- common/autotest_common.sh@1498 -- # bdfs=()
00:04:51.653    10:01:46  -- common/autotest_common.sh@1498 -- # local bdfs
00:04:51.653    10:01:46  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:51.653     10:01:46  -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:04:51.653     10:01:46  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:04:51.653    10:01:46  -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:04:51.653    10:01:46  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:85:00.0
00:04:51.653   10:01:46  -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh reset
00:04:53.026  Waiting for block devices as requested
00:04:53.026  0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:04:53.026  0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:04:53.026  0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:04:53.286  0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:04:53.286  0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:04:53.286  0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:04:53.544  0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:04:53.544  0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:04:53.544  0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:04:53.544  0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:04:53.803  0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:04:53.803  0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:04:53.803  0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:04:53.803  0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:04:54.062  0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:04:54.062  0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:04:54.062  0000:85:00.0 (8086 0a54): vfio-pci -> nvme
00:04:54.320   10:01:49  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:04:54.320    10:01:49  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:85:00.0
00:04:54.320     10:01:49  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0
00:04:54.320     10:01:49  -- common/autotest_common.sh@1487 -- # grep 0000:85:00.0/nvme/nvme
00:04:54.320    10:01:49  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:85:00.0/nvme/nvme0
00:04:54.320    10:01:49  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:85:00.0/nvme/nvme0 ]]
00:04:54.320     10:01:49  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:85:00.0/nvme/nvme0
00:04:54.320    10:01:49  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:04:54.320   10:01:49  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:04:54.320   10:01:49  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:04:54.320    10:01:49  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:04:54.320    10:01:49  -- common/autotest_common.sh@1531 -- # grep oacs
00:04:54.320    10:01:49  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:04:54.320   10:01:49  -- common/autotest_common.sh@1531 -- # oacs=' 0xf'
00:04:54.320   10:01:49  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:04:54.320   10:01:49  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:04:54.320    10:01:49  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:04:54.320    10:01:49  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:04:54.320    10:01:49  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:04:54.320   10:01:49  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:04:54.320   10:01:49  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:04:54.320   10:01:49  -- common/autotest_common.sh@1543 -- # continue
00:04:54.320   10:01:49  -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:04:54.320   10:01:49  -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:54.320   10:01:49  -- common/autotest_common.sh@10 -- # set +x
00:04:54.320   10:01:49  -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:04:54.320   10:01:49  -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:54.320   10:01:49  -- common/autotest_common.sh@10 -- # set +x
00:04:54.320   10:01:49  -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh
00:04:55.698  0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:55.698  0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:55.698  0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:55.698  0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:55.698  0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:55.698  0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:55.698  0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:55.698  0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:55.698  0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:55.698  0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:55.698  0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:55.698  0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:55.698  0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:55.698  0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:55.698  0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:55.958  0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:56.896  0000:85:00.0 (8086 0a54): nvme -> vfio-pci
00:04:56.896   10:01:51  -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:04:56.896   10:01:51  -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:56.896   10:01:51  -- common/autotest_common.sh@10 -- # set +x
00:04:57.153   10:01:52  -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:04:57.153   10:01:52  -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:04:57.153    10:01:52  -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:04:57.153    10:01:52  -- common/autotest_common.sh@1563 -- # bdfs=()
00:04:57.153    10:01:52  -- common/autotest_common.sh@1563 -- # _bdfs=()
00:04:57.153    10:01:52  -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:04:57.153    10:01:52  -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:04:57.153     10:01:52  -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:04:57.153     10:01:52  -- common/autotest_common.sh@1498 -- # bdfs=()
00:04:57.153     10:01:52  -- common/autotest_common.sh@1498 -- # local bdfs
00:04:57.153     10:01:52  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:57.153      10:01:52  -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:04:57.153      10:01:52  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:04:57.153     10:01:52  -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:04:57.153     10:01:52  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:85:00.0
00:04:57.154    10:01:52  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:04:57.154     10:01:52  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:85:00.0/device
00:04:57.154    10:01:52  -- common/autotest_common.sh@1566 -- # device=0x0a54
00:04:57.154    10:01:52  -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:04:57.154    10:01:52  -- common/autotest_common.sh@1568 -- # bdfs+=($bdf)
00:04:57.154    10:01:52  -- common/autotest_common.sh@1572 -- # (( 1 > 0 ))
00:04:57.154    10:01:52  -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:85:00.0
00:04:57.154   10:01:52  -- common/autotest_common.sh@1579 -- # [[ -z 0000:85:00.0 ]]
00:04:57.154   10:01:52  -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1733015
00:04:57.154   10:01:52  -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:04:57.154   10:01:52  -- common/autotest_common.sh@1585 -- # waitforlisten 1733015
00:04:57.154   10:01:52  -- common/autotest_common.sh@835 -- # '[' -z 1733015 ']'
00:04:57.154   10:01:52  -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:57.154   10:01:52  -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:57.154   10:01:52  -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:57.154  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:57.154   10:01:52  -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:57.154   10:01:52  -- common/autotest_common.sh@10 -- # set +x
00:04:57.154  [2024-11-20 10:01:52.200868] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:04:57.154  [2024-11-20 10:01:52.200997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1733015 ]
00:04:57.411  [2024-11-20 10:01:52.339170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:57.411  [2024-11-20 10:01:52.453555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:58.346   10:01:53  -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:58.346   10:01:53  -- common/autotest_common.sh@868 -- # return 0
00:04:58.346   10:01:53  -- common/autotest_common.sh@1587 -- # bdf_id=0
00:04:58.346   10:01:53  -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}"
00:04:58.346   10:01:53  -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:85:00.0
00:05:01.632  nvme0n1
00:05:01.632   10:01:56  -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:05:01.632  [2024-11-20 10:01:56.659860] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18
00:05:01.632  [2024-11-20 10:01:56.659925] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18
00:05:01.632  request:
00:05:01.632  {
00:05:01.632    "nvme_ctrlr_name": "nvme0",
00:05:01.632    "password": "test",
00:05:01.632    "method": "bdev_nvme_opal_revert",
00:05:01.632    "req_id": 1
00:05:01.632  }
00:05:01.632  Got JSON-RPC error response
00:05:01.632  response:
00:05:01.632  {
00:05:01.632    "code": -32603,
00:05:01.632    "message": "Internal error"
00:05:01.632  }
00:05:01.632   10:01:56  -- common/autotest_common.sh@1591 -- # true
00:05:01.632   10:01:56  -- common/autotest_common.sh@1592 -- # (( ++bdf_id ))
00:05:01.632   10:01:56  -- common/autotest_common.sh@1595 -- # killprocess 1733015
00:05:01.632   10:01:56  -- common/autotest_common.sh@954 -- # '[' -z 1733015 ']'
00:05:01.632   10:01:56  -- common/autotest_common.sh@958 -- # kill -0 1733015
00:05:01.632    10:01:56  -- common/autotest_common.sh@959 -- # uname
00:05:01.632   10:01:56  -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:01.632    10:01:56  -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1733015
00:05:01.632   10:01:56  -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:01.632   10:01:56  -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:01.632   10:01:56  -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1733015'
00:05:01.632  killing process with pid 1733015
00:05:01.632   10:01:56  -- common/autotest_common.sh@973 -- # kill 1733015
00:05:01.632   10:01:56  -- common/autotest_common.sh@978 -- # wait 1733015
00:05:04.917   10:01:59  -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:05:04.917   10:01:59  -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:05:04.917   10:01:59  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:05:04.917   10:01:59  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:05:04.917   10:01:59  -- spdk/autotest.sh@149 -- # timing_enter lib
00:05:04.917   10:01:59  -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:04.917   10:01:59  -- common/autotest_common.sh@10 -- # set +x
00:05:04.917   10:01:59  -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:05:04.917   10:01:59  -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/env.sh
00:05:04.917   10:01:59  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:04.917   10:01:59  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:04.917   10:01:59  -- common/autotest_common.sh@10 -- # set +x
00:05:04.917  ************************************
00:05:04.917  START TEST env
00:05:04.917  ************************************
00:05:04.917   10:01:59 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/env.sh
00:05:05.175  * Looking for test storage...
00:05:05.176  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env
00:05:05.176    10:02:00 env -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:05.176     10:02:00 env -- common/autotest_common.sh@1693 -- # lcov --version
00:05:05.176     10:02:00 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:05.176    10:02:00 env -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:05.176    10:02:00 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:05.176    10:02:00 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:05.176    10:02:00 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:05.176    10:02:00 env -- scripts/common.sh@336 -- # IFS=.-:
00:05:05.176    10:02:00 env -- scripts/common.sh@336 -- # read -ra ver1
00:05:05.176    10:02:00 env -- scripts/common.sh@337 -- # IFS=.-:
00:05:05.176    10:02:00 env -- scripts/common.sh@337 -- # read -ra ver2
00:05:05.176    10:02:00 env -- scripts/common.sh@338 -- # local 'op=<'
00:05:05.176    10:02:00 env -- scripts/common.sh@340 -- # ver1_l=2
00:05:05.176    10:02:00 env -- scripts/common.sh@341 -- # ver2_l=1
00:05:05.176    10:02:00 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:05.176    10:02:00 env -- scripts/common.sh@344 -- # case "$op" in
00:05:05.176    10:02:00 env -- scripts/common.sh@345 -- # : 1
00:05:05.176    10:02:00 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:05.176    10:02:00 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:05.176     10:02:00 env -- scripts/common.sh@365 -- # decimal 1
00:05:05.176     10:02:00 env -- scripts/common.sh@353 -- # local d=1
00:05:05.176     10:02:00 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:05.176     10:02:00 env -- scripts/common.sh@355 -- # echo 1
00:05:05.176    10:02:00 env -- scripts/common.sh@365 -- # ver1[v]=1
00:05:05.176     10:02:00 env -- scripts/common.sh@366 -- # decimal 2
00:05:05.176     10:02:00 env -- scripts/common.sh@353 -- # local d=2
00:05:05.176     10:02:00 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:05.176     10:02:00 env -- scripts/common.sh@355 -- # echo 2
00:05:05.176    10:02:00 env -- scripts/common.sh@366 -- # ver2[v]=2
00:05:05.176    10:02:00 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:05.176    10:02:00 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:05.176    10:02:00 env -- scripts/common.sh@368 -- # return 0
00:05:05.176    10:02:00 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:05.176    10:02:00 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:05.176  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:05.176  		--rc genhtml_branch_coverage=1
00:05:05.176  		--rc genhtml_function_coverage=1
00:05:05.176  		--rc genhtml_legend=1
00:05:05.176  		--rc geninfo_all_blocks=1
00:05:05.176  		--rc geninfo_unexecuted_blocks=1
00:05:05.176  		
00:05:05.176  		'
00:05:05.176    10:02:00 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:05.176  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:05.176  		--rc genhtml_branch_coverage=1
00:05:05.176  		--rc genhtml_function_coverage=1
00:05:05.176  		--rc genhtml_legend=1
00:05:05.176  		--rc geninfo_all_blocks=1
00:05:05.176  		--rc geninfo_unexecuted_blocks=1
00:05:05.176  		
00:05:05.176  		'
00:05:05.176    10:02:00 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:05.176  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:05.176  		--rc genhtml_branch_coverage=1
00:05:05.176  		--rc genhtml_function_coverage=1
00:05:05.176  		--rc genhtml_legend=1
00:05:05.176  		--rc geninfo_all_blocks=1
00:05:05.176  		--rc geninfo_unexecuted_blocks=1
00:05:05.176  		
00:05:05.176  		'
00:05:05.176    10:02:00 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:05:05.176  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:05.176  		--rc genhtml_branch_coverage=1
00:05:05.176  		--rc genhtml_function_coverage=1
00:05:05.176  		--rc genhtml_legend=1
00:05:05.176  		--rc geninfo_all_blocks=1
00:05:05.176  		--rc geninfo_unexecuted_blocks=1
00:05:05.176  		
00:05:05.176  		'
00:05:05.176   10:02:00 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/memory/memory_ut
00:05:05.176   10:02:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:05.176   10:02:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:05.176   10:02:00 env -- common/autotest_common.sh@10 -- # set +x
00:05:05.176  ************************************
00:05:05.176  START TEST env_memory
00:05:05.176  ************************************
00:05:05.176   10:02:00 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/memory/memory_ut
00:05:05.176  
00:05:05.176  
00:05:05.176       CUnit - A unit testing framework for C - Version 2.1-3
00:05:05.176       http://cunit.sourceforge.net/
00:05:05.176  
00:05:05.176  
00:05:05.176  Suite: memory
00:05:05.176    Test: alloc and free memory map ...[2024-11-20 10:02:00.212146] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:05:05.176  passed
00:05:05.176    Test: mem map translation ...[2024-11-20 10:02:00.254554] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:05:05.176  [2024-11-20 10:02:00.254606] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:05:05.176  [2024-11-20 10:02:00.254691] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:05:05.176  [2024-11-20 10:02:00.254720] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:05:05.436  passed
00:05:05.436    Test: mem map registration ...[2024-11-20 10:02:00.325066] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:05:05.436  [2024-11-20 10:02:00.325125] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:05:05.436  passed
00:05:05.436    Test: mem map adjacent registrations ...passed
00:05:05.436  
00:05:05.436  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:05.436                suites      1      1    n/a      0        0
00:05:05.436                 tests      4      4      4      0        0
00:05:05.436               asserts    152    152    152      0      n/a
00:05:05.436  
00:05:05.436  Elapsed time =    0.242 seconds
00:05:05.436  
00:05:05.436  real	0m0.264s
00:05:05.436  user	0m0.243s
00:05:05.436  sys	0m0.021s
00:05:05.436   10:02:00 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:05.436   10:02:00 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:05:05.436  ************************************
00:05:05.436  END TEST env_memory
00:05:05.436  ************************************
00:05:05.436   10:02:00 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/vtophys/vtophys
00:05:05.436   10:02:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:05.436   10:02:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:05.436   10:02:00 env -- common/autotest_common.sh@10 -- # set +x
00:05:05.436  ************************************
00:05:05.436  START TEST env_vtophys
00:05:05.436  ************************************
00:05:05.436   10:02:00 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/vtophys/vtophys
00:05:05.436  EAL: lib.eal log level changed from notice to debug
00:05:05.436  EAL: Detected lcore 0 as core 0 on socket 0
00:05:05.436  EAL: Detected lcore 1 as core 1 on socket 0
00:05:05.436  EAL: Detected lcore 2 as core 2 on socket 0
00:05:05.436  EAL: Detected lcore 3 as core 3 on socket 0
00:05:05.436  EAL: Detected lcore 4 as core 4 on socket 0
00:05:05.436  EAL: Detected lcore 5 as core 5 on socket 0
00:05:05.436  EAL: Detected lcore 6 as core 8 on socket 0
00:05:05.436  EAL: Detected lcore 7 as core 9 on socket 0
00:05:05.436  EAL: Detected lcore 8 as core 10 on socket 0
00:05:05.436  EAL: Detected lcore 9 as core 11 on socket 0
00:05:05.436  EAL: Detected lcore 10 as core 12 on socket 0
00:05:05.436  EAL: Detected lcore 11 as core 13 on socket 0
00:05:05.436  EAL: Detected lcore 12 as core 0 on socket 1
00:05:05.436  EAL: Detected lcore 13 as core 1 on socket 1
00:05:05.436  EAL: Detected lcore 14 as core 2 on socket 1
00:05:05.436  EAL: Detected lcore 15 as core 3 on socket 1
00:05:05.436  EAL: Detected lcore 16 as core 4 on socket 1
00:05:05.436  EAL: Detected lcore 17 as core 5 on socket 1
00:05:05.436  EAL: Detected lcore 18 as core 8 on socket 1
00:05:05.436  EAL: Detected lcore 19 as core 9 on socket 1
00:05:05.436  EAL: Detected lcore 20 as core 10 on socket 1
00:05:05.436  EAL: Detected lcore 21 as core 11 on socket 1
00:05:05.436  EAL: Detected lcore 22 as core 12 on socket 1
00:05:05.436  EAL: Detected lcore 23 as core 13 on socket 1
00:05:05.436  EAL: Detected lcore 24 as core 0 on socket 0
00:05:05.436  EAL: Detected lcore 25 as core 1 on socket 0
00:05:05.436  EAL: Detected lcore 26 as core 2 on socket 0
00:05:05.436  EAL: Detected lcore 27 as core 3 on socket 0
00:05:05.436  EAL: Detected lcore 28 as core 4 on socket 0
00:05:05.436  EAL: Detected lcore 29 as core 5 on socket 0
00:05:05.436  EAL: Detected lcore 30 as core 8 on socket 0
00:05:05.436  EAL: Detected lcore 31 as core 9 on socket 0
00:05:05.436  EAL: Detected lcore 32 as core 10 on socket 0
00:05:05.436  EAL: Detected lcore 33 as core 11 on socket 0
00:05:05.436  EAL: Detected lcore 34 as core 12 on socket 0
00:05:05.436  EAL: Detected lcore 35 as core 13 on socket 0
00:05:05.436  EAL: Detected lcore 36 as core 0 on socket 1
00:05:05.436  EAL: Detected lcore 37 as core 1 on socket 1
00:05:05.436  EAL: Detected lcore 38 as core 2 on socket 1
00:05:05.436  EAL: Detected lcore 39 as core 3 on socket 1
00:05:05.436  EAL: Detected lcore 40 as core 4 on socket 1
00:05:05.436  EAL: Detected lcore 41 as core 5 on socket 1
00:05:05.436  EAL: Detected lcore 42 as core 8 on socket 1
00:05:05.436  EAL: Detected lcore 43 as core 9 on socket 1
00:05:05.436  EAL: Detected lcore 44 as core 10 on socket 1
00:05:05.436  EAL: Detected lcore 45 as core 11 on socket 1
00:05:05.436  EAL: Detected lcore 46 as core 12 on socket 1
00:05:05.436  EAL: Detected lcore 47 as core 13 on socket 1
00:05:05.436  EAL: Maximum logical cores by configuration: 128
00:05:05.436  EAL: Detected CPU lcores: 48
00:05:05.436  EAL: Detected NUMA nodes: 2
00:05:05.436  EAL: Checking presence of .so 'librte_eal.so.24.1'
00:05:05.436  EAL: Detected shared linkage of DPDK
00:05:05.436  EAL: No shared files mode enabled, IPC will be disabled
00:05:05.695  EAL: No shared files mode enabled, IPC is disabled
00:05:05.695  EAL: Bus pci wants IOVA as 'DC'
00:05:05.695  EAL: Bus auxiliary wants IOVA as 'DC'
00:05:05.695  EAL: Bus vdev wants IOVA as 'DC'
00:05:05.695  EAL: Buses did not request a specific IOVA mode.
00:05:05.695  EAL: IOMMU is available, selecting IOVA as VA mode.
00:05:05.695  EAL: Selected IOVA mode 'VA'
00:05:05.695  EAL: Probing VFIO support...
00:05:05.695  EAL: IOMMU type 1 (Type 1) is supported
00:05:05.695  EAL: IOMMU type 7 (sPAPR) is not supported
00:05:05.695  EAL: IOMMU type 8 (No-IOMMU) is not supported
00:05:05.695  EAL: VFIO support initialized
00:05:05.695  EAL: Ask a virtual area of 0x2e000 bytes
00:05:05.695  EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:05:05.695  EAL: Setting up physically contiguous memory...
00:05:05.695  EAL: Setting maximum number of open files to 524288
00:05:05.695  EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:05:05.695  EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:05:05.695  EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:05:05.695  EAL: Ask a virtual area of 0x61000 bytes
00:05:05.695  EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:05:05.695  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:05.695  EAL: Ask a virtual area of 0x400000000 bytes
00:05:05.695  EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:05:05.695  EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:05:05.695  EAL: Ask a virtual area of 0x61000 bytes
00:05:05.695  EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:05:05.695  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:05.695  EAL: Ask a virtual area of 0x400000000 bytes
00:05:05.695  EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:05:05.695  EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:05:05.695  EAL: Ask a virtual area of 0x61000 bytes
00:05:05.695  EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:05:05.695  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:05.695  EAL: Ask a virtual area of 0x400000000 bytes
00:05:05.695  EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:05:05.696  EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:05:05.696  EAL: Ask a virtual area of 0x61000 bytes
00:05:05.696  EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:05:05.696  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:05.696  EAL: Ask a virtual area of 0x400000000 bytes
00:05:05.696  EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:05:05.696  EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:05:05.696  EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:05:05.696  EAL: Ask a virtual area of 0x61000 bytes
00:05:05.696  EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:05:05.696  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:05.696  EAL: Ask a virtual area of 0x400000000 bytes
00:05:05.696  EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:05:05.696  EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:05:05.696  EAL: Ask a virtual area of 0x61000 bytes
00:05:05.696  EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:05:05.696  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:05.696  EAL: Ask a virtual area of 0x400000000 bytes
00:05:05.696  EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:05:05.696  EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:05:05.696  EAL: Ask a virtual area of 0x61000 bytes
00:05:05.696  EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:05:05.696  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:05.696  EAL: Ask a virtual area of 0x400000000 bytes
00:05:05.696  EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:05:05.696  EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:05:05.696  EAL: Ask a virtual area of 0x61000 bytes
00:05:05.696  EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:05:05.696  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:05.696  EAL: Ask a virtual area of 0x400000000 bytes
00:05:05.696  EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:05:05.696  EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:05:05.696  EAL: Hugepages will be freed exactly as allocated.
00:05:05.696  EAL: No shared files mode enabled, IPC is disabled
00:05:05.696  EAL: No shared files mode enabled, IPC is disabled
00:05:05.696  EAL: TSC frequency is ~2700000 KHz
00:05:05.696  EAL: Main lcore 0 is ready (tid=7f63f03beb40;cpuset=[0])
00:05:05.696  EAL: Trying to obtain current memory policy.
00:05:05.696  EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:05.696  EAL: Restoring previous memory policy: 0
00:05:05.696  EAL: request: mp_malloc_sync
00:05:05.696  EAL: No shared files mode enabled, IPC is disabled
00:05:05.696  EAL: Heap on socket 0 was expanded by 2MB
00:05:05.696  EAL: No shared files mode enabled, IPC is disabled
00:05:05.696  EAL: No shared files mode enabled, IPC is disabled
00:05:05.696  EAL: No PCI address specified using 'addr=<id>' in: bus=pci
00:05:05.696  EAL: Mem event callback 'spdk:(nil)' registered
00:05:05.696  
00:05:05.696  
00:05:05.696       CUnit - A unit testing framework for C - Version 2.1-3
00:05:05.696       http://cunit.sourceforge.net/
00:05:05.696  
00:05:05.696  
00:05:05.696  Suite: components_suite
00:05:05.954    Test: vtophys_malloc_test ...passed
00:05:05.954    Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:05:05.954  EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:05.954  EAL: Restoring previous memory policy: 4
00:05:05.954  EAL: Calling mem event callback 'spdk:(nil)'
00:05:05.954  EAL: request: mp_malloc_sync
00:05:05.954  EAL: No shared files mode enabled, IPC is disabled
00:05:05.954  EAL: Heap on socket 0 was expanded by 4MB
00:05:05.954  EAL: Calling mem event callback 'spdk:(nil)'
00:05:05.954  EAL: request: mp_malloc_sync
00:05:05.954  EAL: No shared files mode enabled, IPC is disabled
00:05:05.954  EAL: Heap on socket 0 was shrunk by 4MB
00:05:05.954  EAL: Trying to obtain current memory policy.
00:05:05.954  EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:05.954  EAL: Restoring previous memory policy: 4
00:05:05.954  EAL: Calling mem event callback 'spdk:(nil)'
00:05:05.954  EAL: request: mp_malloc_sync
00:05:05.954  EAL: No shared files mode enabled, IPC is disabled
00:05:05.954  EAL: Heap on socket 0 was expanded by 6MB
00:05:05.954  EAL: Calling mem event callback 'spdk:(nil)'
00:05:05.954  EAL: request: mp_malloc_sync
00:05:05.954  EAL: No shared files mode enabled, IPC is disabled
00:05:05.954  EAL: Heap on socket 0 was shrunk by 6MB
00:05:05.954  EAL: Trying to obtain current memory policy.
00:05:05.954  EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:05.954  EAL: Restoring previous memory policy: 4
00:05:05.954  EAL: Calling mem event callback 'spdk:(nil)'
00:05:05.954  EAL: request: mp_malloc_sync
00:05:05.954  EAL: No shared files mode enabled, IPC is disabled
00:05:05.954  EAL: Heap on socket 0 was expanded by 10MB
00:05:05.954  EAL: Calling mem event callback 'spdk:(nil)'
00:05:05.954  EAL: request: mp_malloc_sync
00:05:05.954  EAL: No shared files mode enabled, IPC is disabled
00:05:05.954  EAL: Heap on socket 0 was shrunk by 10MB
00:05:05.954  EAL: Trying to obtain current memory policy.
00:05:05.954  EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:05.954  EAL: Restoring previous memory policy: 4
00:05:05.954  EAL: Calling mem event callback 'spdk:(nil)'
00:05:05.954  EAL: request: mp_malloc_sync
00:05:05.954  EAL: No shared files mode enabled, IPC is disabled
00:05:05.954  EAL: Heap on socket 0 was expanded by 18MB
00:05:06.212  EAL: Calling mem event callback 'spdk:(nil)'
00:05:06.212  EAL: request: mp_malloc_sync
00:05:06.212  EAL: No shared files mode enabled, IPC is disabled
00:05:06.212  EAL: Heap on socket 0 was shrunk by 18MB
00:05:06.212  EAL: Trying to obtain current memory policy.
00:05:06.212  EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:06.212  EAL: Restoring previous memory policy: 4
00:05:06.212  EAL: Calling mem event callback 'spdk:(nil)'
00:05:06.212  EAL: request: mp_malloc_sync
00:05:06.212  EAL: No shared files mode enabled, IPC is disabled
00:05:06.212  EAL: Heap on socket 0 was expanded by 34MB
00:05:06.212  EAL: Calling mem event callback 'spdk:(nil)'
00:05:06.212  EAL: request: mp_malloc_sync
00:05:06.212  EAL: No shared files mode enabled, IPC is disabled
00:05:06.212  EAL: Heap on socket 0 was shrunk by 34MB
00:05:06.212  EAL: Trying to obtain current memory policy.
00:05:06.212  EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:06.212  EAL: Restoring previous memory policy: 4
00:05:06.212  EAL: Calling mem event callback 'spdk:(nil)'
00:05:06.212  EAL: request: mp_malloc_sync
00:05:06.212  EAL: No shared files mode enabled, IPC is disabled
00:05:06.212  EAL: Heap on socket 0 was expanded by 66MB
00:05:06.212  EAL: Calling mem event callback 'spdk:(nil)'
00:05:06.212  EAL: request: mp_malloc_sync
00:05:06.212  EAL: No shared files mode enabled, IPC is disabled
00:05:06.212  EAL: Heap on socket 0 was shrunk by 66MB
00:05:06.470  EAL: Trying to obtain current memory policy.
00:05:06.470  EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:06.470  EAL: Restoring previous memory policy: 4
00:05:06.470  EAL: Calling mem event callback 'spdk:(nil)'
00:05:06.470  EAL: request: mp_malloc_sync
00:05:06.470  EAL: No shared files mode enabled, IPC is disabled
00:05:06.470  EAL: Heap on socket 0 was expanded by 130MB
00:05:06.728  EAL: Calling mem event callback 'spdk:(nil)'
00:05:06.728  EAL: request: mp_malloc_sync
00:05:06.728  EAL: No shared files mode enabled, IPC is disabled
00:05:06.728  EAL: Heap on socket 0 was shrunk by 130MB
00:05:06.728  EAL: Trying to obtain current memory policy.
00:05:06.728  EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:06.986  EAL: Restoring previous memory policy: 4
00:05:06.986  EAL: Calling mem event callback 'spdk:(nil)'
00:05:06.986  EAL: request: mp_malloc_sync
00:05:06.986  EAL: No shared files mode enabled, IPC is disabled
00:05:06.986  EAL: Heap on socket 0 was expanded by 258MB
00:05:07.244  EAL: Calling mem event callback 'spdk:(nil)'
00:05:07.244  EAL: request: mp_malloc_sync
00:05:07.244  EAL: No shared files mode enabled, IPC is disabled
00:05:07.244  EAL: Heap on socket 0 was shrunk by 258MB
00:05:07.810  EAL: Trying to obtain current memory policy.
00:05:07.810  EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:07.810  EAL: Restoring previous memory policy: 4
00:05:07.810  EAL: Calling mem event callback 'spdk:(nil)'
00:05:07.810  EAL: request: mp_malloc_sync
00:05:07.810  EAL: No shared files mode enabled, IPC is disabled
00:05:07.810  EAL: Heap on socket 0 was expanded by 514MB
00:05:08.742  EAL: Calling mem event callback 'spdk:(nil)'
00:05:08.742  EAL: request: mp_malloc_sync
00:05:08.742  EAL: No shared files mode enabled, IPC is disabled
00:05:08.742  EAL: Heap on socket 0 was shrunk by 514MB
00:05:09.309  EAL: Trying to obtain current memory policy.
00:05:09.309  EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:09.567  EAL: Restoring previous memory policy: 4
00:05:09.567  EAL: Calling mem event callback 'spdk:(nil)'
00:05:09.567  EAL: request: mp_malloc_sync
00:05:09.567  EAL: No shared files mode enabled, IPC is disabled
00:05:09.567  EAL: Heap on socket 0 was expanded by 1026MB
00:05:11.467  EAL: Calling mem event callback 'spdk:(nil)'
00:05:11.467  EAL: request: mp_malloc_sync
00:05:11.467  EAL: No shared files mode enabled, IPC is disabled
00:05:11.467  EAL: Heap on socket 0 was shrunk by 1026MB
00:05:12.842  passed
00:05:12.842  
00:05:12.842  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:12.842                suites      1      1    n/a      0        0
00:05:12.842                 tests      2      2      2      0        0
00:05:12.842               asserts    497    497    497      0      n/a
00:05:12.842  
00:05:12.842  Elapsed time =    6.830 seconds
00:05:12.842  EAL: Calling mem event callback 'spdk:(nil)'
00:05:12.842  EAL: request: mp_malloc_sync
00:05:12.842  EAL: No shared files mode enabled, IPC is disabled
00:05:12.842  EAL: Heap on socket 0 was shrunk by 2MB
00:05:12.842  EAL: No shared files mode enabled, IPC is disabled
00:05:12.842  EAL: No shared files mode enabled, IPC is disabled
00:05:12.842  EAL: No shared files mode enabled, IPC is disabled
00:05:12.842  
00:05:12.842  real	0m7.100s
00:05:12.842  user	0m6.051s
00:05:12.842  sys	0m0.993s
00:05:12.842   10:02:07 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:12.842   10:02:07 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:12.842  ************************************
00:05:12.842  END TEST env_vtophys
00:05:12.842  ************************************
00:05:12.842   10:02:07 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/pci/pci_ut
00:05:12.842   10:02:07 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:12.842   10:02:07 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:12.842   10:02:07 env -- common/autotest_common.sh@10 -- # set +x
00:05:12.842  ************************************
00:05:12.842  START TEST env_pci
00:05:12.842  ************************************
00:05:12.842   10:02:07 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/pci/pci_ut
00:05:12.842  
00:05:12.842  
00:05:12.842       CUnit - A unit testing framework for C - Version 2.1-3
00:05:12.842       http://cunit.sourceforge.net/
00:05:12.842  
00:05:12.842  
00:05:12.842  Suite: pci
00:05:12.842    Test: pci_hook ...[2024-11-20 10:02:07.662015] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1734855 has claimed it
00:05:12.842  EAL: Cannot find device (10000:00:01.0)
00:05:12.842  EAL: Failed to attach device on primary process
00:05:12.842  passed
00:05:12.842  
00:05:12.842  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:12.842                suites      1      1    n/a      0        0
00:05:12.842                 tests      1      1      1      0        0
00:05:12.842               asserts     25     25     25      0      n/a
00:05:12.842  
00:05:12.842  Elapsed time =    0.044 seconds
00:05:12.842  
00:05:12.842  real	0m0.103s
00:05:12.842  user	0m0.044s
00:05:12.842  sys	0m0.058s
00:05:12.842   10:02:07 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:12.842   10:02:07 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:12.842  ************************************
00:05:12.842  END TEST env_pci
00:05:12.842  ************************************
00:05:12.842   10:02:07 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:12.842    10:02:07 env -- env/env.sh@15 -- # uname
00:05:12.842   10:02:07 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:12.842   10:02:07 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:12.842   10:02:07 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:12.842   10:02:07 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:05:12.842   10:02:07 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:12.842   10:02:07 env -- common/autotest_common.sh@10 -- # set +x
00:05:12.842  ************************************
00:05:12.842  START TEST env_dpdk_post_init
00:05:12.842  ************************************
00:05:12.842   10:02:07 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:12.842  EAL: Detected CPU lcores: 48
00:05:12.842  EAL: Detected NUMA nodes: 2
00:05:12.842  EAL: Detected shared linkage of DPDK
00:05:12.842  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:12.842  EAL: Selected IOVA mode 'VA'
00:05:12.842  EAL: VFIO support initialized
00:05:12.842  TELEMETRY: No legacy callbacks, legacy socket not created
00:05:13.102  EAL: Using IOMMU type 1 (Type 1)
00:05:13.102  EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0)
00:05:13.102  EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0)
00:05:13.102  EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0)
00:05:13.102  EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0)
00:05:13.102  EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0)
00:05:13.102  EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0)
00:05:13.102  EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0)
00:05:13.102  EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0)
00:05:13.102  EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1)
00:05:13.102  EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1)
00:05:13.102  EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1)
00:05:13.102  EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1)
00:05:13.102  EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1)
00:05:13.102  EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1)
00:05:13.102  EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1)
00:05:13.102  EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1)
00:05:14.041  EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:85:00.0 (socket 1)
00:05:17.322  EAL: Releasing PCI mapped resource for 0000:85:00.0
00:05:17.322  EAL: Calling pci_unmap_resource for 0000:85:00.0 at 0x202001040000
00:05:17.322  Starting DPDK initialization...
00:05:17.322  Starting SPDK post initialization...
00:05:17.322  SPDK NVMe probe
00:05:17.322  Attaching to 0000:85:00.0
00:05:17.322  Attached to 0000:85:00.0
00:05:17.322  Cleaning up...
00:05:17.322  
00:05:17.322  real	0m4.550s
00:05:17.322  user	0m3.098s
00:05:17.322  sys	0m0.508s
00:05:17.322   10:02:12 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:17.322   10:02:12 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:17.322  ************************************
00:05:17.322  END TEST env_dpdk_post_init
00:05:17.322  ************************************
00:05:17.322    10:02:12 env -- env/env.sh@26 -- # uname
00:05:17.322   10:02:12 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:17.322   10:02:12 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:17.322   10:02:12 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:17.322   10:02:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:17.322   10:02:12 env -- common/autotest_common.sh@10 -- # set +x
00:05:17.322  ************************************
00:05:17.322  START TEST env_mem_callbacks
00:05:17.322  ************************************
00:05:17.322   10:02:12 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:17.322  EAL: Detected CPU lcores: 48
00:05:17.322  EAL: Detected NUMA nodes: 2
00:05:17.322  EAL: Detected shared linkage of DPDK
00:05:17.580  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:17.580  EAL: Selected IOVA mode 'VA'
00:05:17.580  EAL: VFIO support initialized
00:05:17.580  TELEMETRY: No legacy callbacks, legacy socket not created
00:05:17.580  
00:05:17.580  
00:05:17.580       CUnit - A unit testing framework for C - Version 2.1-3
00:05:17.580       http://cunit.sourceforge.net/
00:05:17.580  
00:05:17.580  
00:05:17.580  Suite: memory
00:05:17.580    Test: test ...
00:05:17.580  register 0x200000200000 2097152
00:05:17.580  malloc 3145728
00:05:17.580  register 0x200000400000 4194304
00:05:17.580  buf 0x2000004fffc0 len 3145728 PASSED
00:05:17.580  malloc 64
00:05:17.580  buf 0x2000004ffec0 len 64 PASSED
00:05:17.580  malloc 4194304
00:05:17.580  register 0x200000800000 6291456
00:05:17.580  buf 0x2000009fffc0 len 4194304 PASSED
00:05:17.580  free 0x2000004fffc0 3145728
00:05:17.580  free 0x2000004ffec0 64
00:05:17.580  unregister 0x200000400000 4194304 PASSED
00:05:17.580  free 0x2000009fffc0 4194304
00:05:17.580  unregister 0x200000800000 6291456 PASSED
00:05:17.580  malloc 8388608
00:05:17.580  register 0x200000400000 10485760
00:05:17.580  buf 0x2000005fffc0 len 8388608 PASSED
00:05:17.580  free 0x2000005fffc0 8388608
00:05:17.580  unregister 0x200000400000 10485760 PASSED
00:05:17.580  passed
00:05:17.580  
00:05:17.580  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:17.580                suites      1      1    n/a      0        0
00:05:17.580                 tests      1      1      1      0        0
00:05:17.580               asserts     15     15     15      0      n/a
00:05:17.580  
00:05:17.580  Elapsed time =    0.049 seconds
00:05:17.580  
00:05:17.580  real	0m0.172s
00:05:17.580  user	0m0.099s
00:05:17.580  sys	0m0.072s
00:05:17.580   10:02:12 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:17.580   10:02:12 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:17.580  ************************************
00:05:17.580  END TEST env_mem_callbacks
00:05:17.580  ************************************
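The mem_callbacks output above pairs every `register` of a base address with a later `unregister` of the same address and length, except the first 2 MiB region, for which the log shows no matching unregister. That pairing invariant can be replayed offline; the tracker below is a hypothetical audit of the events copied from the log, not SPDK code.

```python
# Replay of the register/unregister events printed by the mem_callbacks
# test above (addresses and lengths copied verbatim from the log).
events = [
    ("register",   0x200000200000, 2097152),
    ("register",   0x200000400000, 4194304),
    ("unregister", 0x200000400000, 4194304),
    ("register",   0x200000800000, 6291456),
    ("unregister", 0x200000800000, 6291456),
    ("register",   0x200000400000, 10485760),
    ("unregister", 0x200000400000, 10485760),
]

def audit(events):
    """Check each unregister matches a prior register of the same
    base address and length; return regions still registered."""
    live = {}  # base vaddr -> region length
    for op, vaddr, length in events:
        if op == "register":
            assert vaddr not in live, "double register"
            live[vaddr] = length
        else:
            assert live.pop(vaddr, None) == length, "mismatched unregister"
    return live

leaked = audit(events)
print(len(leaked))  # → 1: the initial 2 MiB region is never unregistered
```

Note the test reuses base address 0x200000400000 after it was unregistered, which the audit handles correctly because the pop removed the earlier entry.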
00:05:17.580  
00:05:17.580  real	0m12.597s
00:05:17.580  user	0m9.742s
00:05:17.580  sys	0m1.875s
00:05:17.580   10:02:12 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:17.580   10:02:12 env -- common/autotest_common.sh@10 -- # set +x
00:05:17.580  ************************************
00:05:17.580  END TEST env
00:05:17.580  ************************************
00:05:17.580   10:02:12  -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/rpc.sh
00:05:17.580   10:02:12  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:17.580   10:02:12  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:17.580   10:02:12  -- common/autotest_common.sh@10 -- # set +x
00:05:17.580  ************************************
00:05:17.580  START TEST rpc
00:05:17.580  ************************************
00:05:17.580   10:02:12 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/rpc.sh
00:05:17.580  * Looking for test storage...
00:05:17.580  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc
00:05:17.580    10:02:12 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:17.580     10:02:12 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:05:17.580     10:02:12 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:17.839    10:02:12 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:17.839    10:02:12 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:17.839    10:02:12 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:17.839    10:02:12 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:17.839    10:02:12 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:17.839    10:02:12 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:17.839    10:02:12 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:17.839    10:02:12 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:17.839    10:02:12 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:17.839    10:02:12 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:17.839    10:02:12 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:17.839    10:02:12 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:17.839    10:02:12 rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:17.839    10:02:12 rpc -- scripts/common.sh@345 -- # : 1
00:05:17.839    10:02:12 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:17.839    10:02:12 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:17.839     10:02:12 rpc -- scripts/common.sh@365 -- # decimal 1
00:05:17.839     10:02:12 rpc -- scripts/common.sh@353 -- # local d=1
00:05:17.839     10:02:12 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:17.839     10:02:12 rpc -- scripts/common.sh@355 -- # echo 1
00:05:17.839    10:02:12 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:17.839     10:02:12 rpc -- scripts/common.sh@366 -- # decimal 2
00:05:17.839     10:02:12 rpc -- scripts/common.sh@353 -- # local d=2
00:05:17.839     10:02:12 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:17.839     10:02:12 rpc -- scripts/common.sh@355 -- # echo 2
00:05:17.839    10:02:12 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:17.839    10:02:12 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:17.839    10:02:12 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:17.839    10:02:12 rpc -- scripts/common.sh@368 -- # return 0
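The xtrace above walks `scripts/common.sh`'s `cmp_versions 1.15 '<' 2`: both versions are split on `.`, `-`, or `:` (the `IFS=.-:` lines), the shorter one is padded, and components are compared left to right. A standalone Python approximation of that comparison (numeric components only; the shell version also coerces non-numeric parts via its `decimal` helper, which this sketch omits):

```python
import re

def cmp_lt(v1, v2):
    """Component-wise version compare, approximating the shell trace
    above: split on '.', '-' or ':', pad the shorter list with zeros,
    and return True iff v1 < v2."""
    a = [int(x) for x in re.split(r"[.:-]", v1)]
    b = [int(x) for x in re.split(r"[.:-]", v2)]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    for x, y in zip(a, b):
        if x != y:
            return x < y
    return False

print(cmp_lt("1.15", "2"))  # → True: the lcov 1.15-vs-2 check traced above
```

Because `1 < 2` already decides the first component, the trace returns 0 ("less than") without ever comparing the `15`.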
00:05:17.839    10:02:12 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:17.839    10:02:12 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:17.839  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:17.839  		--rc genhtml_branch_coverage=1
00:05:17.839  		--rc genhtml_function_coverage=1
00:05:17.839  		--rc genhtml_legend=1
00:05:17.839  		--rc geninfo_all_blocks=1
00:05:17.839  		--rc geninfo_unexecuted_blocks=1
00:05:17.839  		
00:05:17.839  		'
00:05:17.839    10:02:12 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:17.839  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:17.839  		--rc genhtml_branch_coverage=1
00:05:17.839  		--rc genhtml_function_coverage=1
00:05:17.839  		--rc genhtml_legend=1
00:05:17.839  		--rc geninfo_all_blocks=1
00:05:17.839  		--rc geninfo_unexecuted_blocks=1
00:05:17.839  		
00:05:17.839  		'
00:05:17.839    10:02:12 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:17.839  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:17.839  		--rc genhtml_branch_coverage=1
00:05:17.839  		--rc genhtml_function_coverage=1
00:05:17.839  		--rc genhtml_legend=1
00:05:17.839  		--rc geninfo_all_blocks=1
00:05:17.839  		--rc geninfo_unexecuted_blocks=1
00:05:17.839  		
00:05:17.839  		'
00:05:17.839    10:02:12 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:05:17.839  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:17.839  		--rc genhtml_branch_coverage=1
00:05:17.839  		--rc genhtml_function_coverage=1
00:05:17.839  		--rc genhtml_legend=1
00:05:17.839  		--rc geninfo_all_blocks=1
00:05:17.839  		--rc geninfo_unexecuted_blocks=1
00:05:17.839  		
00:05:17.839  		'
00:05:17.839   10:02:12 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1735645
00:05:17.839   10:02:12 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:05:17.839   10:02:12 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:17.839   10:02:12 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1735645
00:05:17.839   10:02:12 rpc -- common/autotest_common.sh@835 -- # '[' -z 1735645 ']'
00:05:17.839   10:02:12 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:17.839   10:02:12 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:17.839   10:02:12 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:17.839  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:17.839   10:02:12 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:17.839   10:02:12 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:17.839  [2024-11-20 10:02:12.891987] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:05:17.839  [2024-11-20 10:02:12.892120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1735645 ]
00:05:18.097  [2024-11-20 10:02:13.022708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:18.097  [2024-11-20 10:02:13.137992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:05:18.097  [2024-11-20 10:02:13.138059] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1735645' to capture a snapshot of events at runtime.
00:05:18.097  [2024-11-20 10:02:13.138097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:18.097  [2024-11-20 10:02:13.138114] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:18.097  [2024-11-20 10:02:13.138131] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1735645 for offline analysis/debug.
00:05:18.097  [2024-11-20 10:02:13.139392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
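The `waitforlisten 1735645` call above polls (up to `max_retries=100`) until the freshly launched `spdk_tgt` accepts connections on `/var/tmp/spdk.sock`. The polling pattern can be sketched standalone; the function name and the throwaway socket below are illustrative, not taken from `autotest_common.sh`.

```python
import os
import socket
import tempfile
import time

def wait_for_listen(path, max_retries=100, delay=0.05):
    """Poll until a UNIX-domain socket at `path` accepts connections,
    mirroring the retry loop traced in the log above."""
    for _ in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True
        except (FileNotFoundError, ConnectionRefusedError):
            time.sleep(delay)
        finally:
            s.close()
    return False

# Demo against a listener we control (a stand-in for spdk_tgt's RPC socket).
sock_path = os.path.join(tempfile.mkdtemp(), "spdk.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(sock_path)
server.listen(1)
print(wait_for_listen(sock_path))  # → True once the listener is up
```

In the log the loop exits on the first probe that connects, after which the test proceeds to export `PYTHONPATH` and run the RPC subtests.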
00:05:19.030   10:02:13 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:19.030   10:02:13 rpc -- common/autotest_common.sh@868 -- # return 0
00:05:19.030   10:02:13 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc
00:05:19.030   10:02:13 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc
00:05:19.030   10:02:13 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:05:19.030   10:02:13 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:05:19.030   10:02:13 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:19.030   10:02:13 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:19.030   10:02:13 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:19.030  ************************************
00:05:19.030  START TEST rpc_integrity
00:05:19.030  ************************************
00:05:19.030   10:02:13 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:05:19.030    10:02:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:19.030    10:02:13 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:19.030    10:02:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:19.030    10:02:13 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:19.030   10:02:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:19.030    10:02:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:19.030   10:02:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:19.030    10:02:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:19.030    10:02:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:19.030    10:02:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:19.030    10:02:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:19.030   10:02:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:05:19.030    10:02:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:19.030    10:02:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:19.030    10:02:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:19.030    10:02:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:19.030   10:02:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:19.030  {
00:05:19.030  "name": "Malloc0",
00:05:19.030  "aliases": [
00:05:19.030  "90070472-855b-45e0-816d-8d5427fdf405"
00:05:19.030  ],
00:05:19.030  "product_name": "Malloc disk",
00:05:19.030  "block_size": 512,
00:05:19.030  "num_blocks": 16384,
00:05:19.030  "uuid": "90070472-855b-45e0-816d-8d5427fdf405",
00:05:19.030  "assigned_rate_limits": {
00:05:19.030  "rw_ios_per_sec": 0,
00:05:19.030  "rw_mbytes_per_sec": 0,
00:05:19.030  "r_mbytes_per_sec": 0,
00:05:19.030  "w_mbytes_per_sec": 0
00:05:19.030  },
00:05:19.030  "claimed": false,
00:05:19.030  "zoned": false,
00:05:19.030  "supported_io_types": {
00:05:19.030  "read": true,
00:05:19.030  "write": true,
00:05:19.030  "unmap": true,
00:05:19.030  "flush": true,
00:05:19.030  "reset": true,
00:05:19.030  "nvme_admin": false,
00:05:19.030  "nvme_io": false,
00:05:19.030  "nvme_io_md": false,
00:05:19.030  "write_zeroes": true,
00:05:19.030  "zcopy": true,
00:05:19.030  "get_zone_info": false,
00:05:19.030  "zone_management": false,
00:05:19.030  "zone_append": false,
00:05:19.031  "compare": false,
00:05:19.031  "compare_and_write": false,
00:05:19.031  "abort": true,
00:05:19.031  "seek_hole": false,
00:05:19.031  "seek_data": false,
00:05:19.031  "copy": true,
00:05:19.031  "nvme_iov_md": false
00:05:19.031  },
00:05:19.031  "memory_domains": [
00:05:19.031  {
00:05:19.031  "dma_device_id": "system",
00:05:19.031  "dma_device_type": 1
00:05:19.031  },
00:05:19.031  {
00:05:19.031  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:19.031  "dma_device_type": 2
00:05:19.031  }
00:05:19.031  ],
00:05:19.031  "driver_specific": {}
00:05:19.031  }
00:05:19.031  ]'
00:05:19.031    10:02:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:19.031   10:02:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:19.031   10:02:14 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:05:19.031   10:02:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:19.031   10:02:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:19.031  [2024-11-20 10:02:14.092886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:05:19.031  [2024-11-20 10:02:14.092958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:19.031  [2024-11-20 10:02:14.092998] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022b80
00:05:19.031  [2024-11-20 10:02:14.093018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:19.031  [2024-11-20 10:02:14.095380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:19.031  [2024-11-20 10:02:14.095410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:19.031  Passthru0
00:05:19.031   10:02:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:19.031    10:02:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:19.031    10:02:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:19.031    10:02:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:19.031    10:02:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:19.031   10:02:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:19.031  {
00:05:19.031  "name": "Malloc0",
00:05:19.031  "aliases": [
00:05:19.031  "90070472-855b-45e0-816d-8d5427fdf405"
00:05:19.031  ],
00:05:19.031  "product_name": "Malloc disk",
00:05:19.031  "block_size": 512,
00:05:19.031  "num_blocks": 16384,
00:05:19.031  "uuid": "90070472-855b-45e0-816d-8d5427fdf405",
00:05:19.031  "assigned_rate_limits": {
00:05:19.031  "rw_ios_per_sec": 0,
00:05:19.031  "rw_mbytes_per_sec": 0,
00:05:19.031  "r_mbytes_per_sec": 0,
00:05:19.031  "w_mbytes_per_sec": 0
00:05:19.031  },
00:05:19.031  "claimed": true,
00:05:19.031  "claim_type": "exclusive_write",
00:05:19.031  "zoned": false,
00:05:19.031  "supported_io_types": {
00:05:19.031  "read": true,
00:05:19.031  "write": true,
00:05:19.031  "unmap": true,
00:05:19.031  "flush": true,
00:05:19.031  "reset": true,
00:05:19.031  "nvme_admin": false,
00:05:19.031  "nvme_io": false,
00:05:19.031  "nvme_io_md": false,
00:05:19.031  "write_zeroes": true,
00:05:19.031  "zcopy": true,
00:05:19.031  "get_zone_info": false,
00:05:19.031  "zone_management": false,
00:05:19.031  "zone_append": false,
00:05:19.031  "compare": false,
00:05:19.031  "compare_and_write": false,
00:05:19.031  "abort": true,
00:05:19.031  "seek_hole": false,
00:05:19.031  "seek_data": false,
00:05:19.031  "copy": true,
00:05:19.031  "nvme_iov_md": false
00:05:19.031  },
00:05:19.031  "memory_domains": [
00:05:19.031  {
00:05:19.031  "dma_device_id": "system",
00:05:19.031  "dma_device_type": 1
00:05:19.031  },
00:05:19.031  {
00:05:19.031  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:19.031  "dma_device_type": 2
00:05:19.031  }
00:05:19.031  ],
00:05:19.031  "driver_specific": {}
00:05:19.031  },
00:05:19.031  {
00:05:19.031  "name": "Passthru0",
00:05:19.031  "aliases": [
00:05:19.031  "09aa7012-6c8a-5b0c-bb4d-a7ef03d8c0cc"
00:05:19.031  ],
00:05:19.031  "product_name": "passthru",
00:05:19.031  "block_size": 512,
00:05:19.031  "num_blocks": 16384,
00:05:19.031  "uuid": "09aa7012-6c8a-5b0c-bb4d-a7ef03d8c0cc",
00:05:19.031  "assigned_rate_limits": {
00:05:19.031  "rw_ios_per_sec": 0,
00:05:19.031  "rw_mbytes_per_sec": 0,
00:05:19.031  "r_mbytes_per_sec": 0,
00:05:19.031  "w_mbytes_per_sec": 0
00:05:19.031  },
00:05:19.031  "claimed": false,
00:05:19.031  "zoned": false,
00:05:19.031  "supported_io_types": {
00:05:19.031  "read": true,
00:05:19.031  "write": true,
00:05:19.031  "unmap": true,
00:05:19.031  "flush": true,
00:05:19.031  "reset": true,
00:05:19.031  "nvme_admin": false,
00:05:19.031  "nvme_io": false,
00:05:19.031  "nvme_io_md": false,
00:05:19.031  "write_zeroes": true,
00:05:19.031  "zcopy": true,
00:05:19.031  "get_zone_info": false,
00:05:19.031  "zone_management": false,
00:05:19.031  "zone_append": false,
00:05:19.031  "compare": false,
00:05:19.031  "compare_and_write": false,
00:05:19.031  "abort": true,
00:05:19.031  "seek_hole": false,
00:05:19.031  "seek_data": false,
00:05:19.031  "copy": true,
00:05:19.031  "nvme_iov_md": false
00:05:19.031  },
00:05:19.031  "memory_domains": [
00:05:19.031  {
00:05:19.031  "dma_device_id": "system",
00:05:19.031  "dma_device_type": 1
00:05:19.031  },
00:05:19.031  {
00:05:19.031  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:19.031  "dma_device_type": 2
00:05:19.031  }
00:05:19.031  ],
00:05:19.031  "driver_specific": {
00:05:19.031  "passthru": {
00:05:19.031  "name": "Passthru0",
00:05:19.031  "base_bdev_name": "Malloc0"
00:05:19.031  }
00:05:19.031  }
00:05:19.031  }
00:05:19.031  ]'
00:05:19.031    10:02:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:19.031   10:02:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:19.031   10:02:14 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:19.031   10:02:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:19.031   10:02:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:19.289   10:02:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:19.289   10:02:14 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:05:19.289   10:02:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:19.289   10:02:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:19.289   10:02:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:19.289    10:02:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:19.289    10:02:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:19.289    10:02:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:19.289    10:02:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:19.289   10:02:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:19.289    10:02:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:19.289   10:02:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:19.289  
00:05:19.289  real	0m0.249s
00:05:19.289  user	0m0.142s
00:05:19.289  sys	0m0.018s
00:05:19.289   10:02:14 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:19.289   10:02:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:19.289  ************************************
00:05:19.289  END TEST rpc_integrity
00:05:19.289  ************************************
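The rpc_integrity run above layers `bdev_passthru_create` on a malloc bdev and then asserts on `jq length` of `bdev_get_bdevs` (rpc.sh@21 expects 2). The same checks can be replayed offline against the JSON captured in the log; the snippet below is trimmed to just the fields the test inspects.

```python
import json

# Trimmed copy of the bdev_get_bdevs output captured after
# bdev_passthru_create in the log above.
bdevs = json.loads("""
[
  {"name": "Malloc0", "claimed": true, "claim_type": "exclusive_write",
   "block_size": 512, "num_blocks": 16384},
  {"name": "Passthru0", "claimed": false,
   "driver_specific": {"passthru": {"base_bdev_name": "Malloc0"}}}
]
""")

# rpc.sh@21: two bdevs exist once the passthru is layered on the malloc.
print(len(bdevs))  # → 2

# The passthru claims its base exclusively, exactly as the log's JSON shows.
print(bdevs[0]["claimed"], bdevs[0]["claim_type"])  # → True exclusive_write
print(bdevs[1]["driver_specific"]["passthru"]["base_bdev_name"])  # → Malloc0
```

After `bdev_passthru_delete` and `bdev_malloc_delete`, the log's final `jq length` check sees an empty `[]` again, closing the create/claim/delete round trip.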
00:05:19.289   10:02:14 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:05:19.289   10:02:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:19.289   10:02:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:19.289   10:02:14 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:19.289  ************************************
00:05:19.289  START TEST rpc_plugins
00:05:19.289  ************************************
00:05:19.289   10:02:14 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:05:19.289    10:02:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:05:19.289    10:02:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:19.289    10:02:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:19.289    10:02:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:19.289   10:02:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:05:19.289    10:02:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:05:19.289    10:02:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:19.289    10:02:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:19.289    10:02:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:19.289   10:02:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:05:19.289  {
00:05:19.289  "name": "Malloc1",
00:05:19.289  "aliases": [
00:05:19.289  "95a5675c-1692-43c2-b2cf-62c66aadde1d"
00:05:19.289  ],
00:05:19.289  "product_name": "Malloc disk",
00:05:19.289  "block_size": 4096,
00:05:19.289  "num_blocks": 256,
00:05:19.289  "uuid": "95a5675c-1692-43c2-b2cf-62c66aadde1d",
00:05:19.289  "assigned_rate_limits": {
00:05:19.289  "rw_ios_per_sec": 0,
00:05:19.289  "rw_mbytes_per_sec": 0,
00:05:19.289  "r_mbytes_per_sec": 0,
00:05:19.289  "w_mbytes_per_sec": 0
00:05:19.289  },
00:05:19.289  "claimed": false,
00:05:19.289  "zoned": false,
00:05:19.289  "supported_io_types": {
00:05:19.289  "read": true,
00:05:19.289  "write": true,
00:05:19.289  "unmap": true,
00:05:19.289  "flush": true,
00:05:19.289  "reset": true,
00:05:19.289  "nvme_admin": false,
00:05:19.289  "nvme_io": false,
00:05:19.289  "nvme_io_md": false,
00:05:19.289  "write_zeroes": true,
00:05:19.289  "zcopy": true,
00:05:19.289  "get_zone_info": false,
00:05:19.289  "zone_management": false,
00:05:19.289  "zone_append": false,
00:05:19.289  "compare": false,
00:05:19.289  "compare_and_write": false,
00:05:19.289  "abort": true,
00:05:19.289  "seek_hole": false,
00:05:19.289  "seek_data": false,
00:05:19.289  "copy": true,
00:05:19.289  "nvme_iov_md": false
00:05:19.289  },
00:05:19.289  "memory_domains": [
00:05:19.289  {
00:05:19.289  "dma_device_id": "system",
00:05:19.289  "dma_device_type": 1
00:05:19.289  },
00:05:19.289  {
00:05:19.290  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:19.290  "dma_device_type": 2
00:05:19.290  }
00:05:19.290  ],
00:05:19.290  "driver_specific": {}
00:05:19.290  }
00:05:19.290  ]'
00:05:19.290    10:02:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:05:19.290   10:02:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:05:19.290   10:02:14 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:05:19.290   10:02:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:19.290   10:02:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:19.290   10:02:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:19.290    10:02:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:05:19.290    10:02:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:19.290    10:02:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:19.290    10:02:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:19.290   10:02:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:05:19.290    10:02:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:05:19.290   10:02:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:05:19.290  
00:05:19.290  real	0m0.109s
00:05:19.290  user	0m0.068s
00:05:19.290  sys	0m0.009s
00:05:19.290   10:02:14 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:19.290   10:02:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:19.290  ************************************
00:05:19.290  END TEST rpc_plugins
00:05:19.290  ************************************
00:05:19.290   10:02:14 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:05:19.290   10:02:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:19.290   10:02:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:19.290   10:02:14 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:19.548  ************************************
00:05:19.548  START TEST rpc_trace_cmd_test
00:05:19.548  ************************************
00:05:19.548   10:02:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:05:19.548   10:02:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:05:19.548    10:02:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:05:19.548    10:02:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:19.548    10:02:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:05:19.548    10:02:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:19.548   10:02:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:05:19.548  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1735645",
00:05:19.548  "tpoint_group_mask": "0x8",
00:05:19.548  "iscsi_conn": {
00:05:19.548  "mask": "0x2",
00:05:19.548  "tpoint_mask": "0x0"
00:05:19.548  },
00:05:19.548  "scsi": {
00:05:19.548  "mask": "0x4",
00:05:19.548  "tpoint_mask": "0x0"
00:05:19.548  },
00:05:19.548  "bdev": {
00:05:19.548  "mask": "0x8",
00:05:19.548  "tpoint_mask": "0xffffffffffffffff"
00:05:19.548  },
00:05:19.548  "nvmf_rdma": {
00:05:19.548  "mask": "0x10",
00:05:19.548  "tpoint_mask": "0x0"
00:05:19.548  },
00:05:19.548  "nvmf_tcp": {
00:05:19.548  "mask": "0x20",
00:05:19.548  "tpoint_mask": "0x0"
00:05:19.548  },
00:05:19.548  "ftl": {
00:05:19.548  "mask": "0x40",
00:05:19.548  "tpoint_mask": "0x0"
00:05:19.548  },
00:05:19.548  "blobfs": {
00:05:19.548  "mask": "0x80",
00:05:19.548  "tpoint_mask": "0x0"
00:05:19.548  },
00:05:19.548  "dsa": {
00:05:19.548  "mask": "0x200",
00:05:19.548  "tpoint_mask": "0x0"
00:05:19.548  },
00:05:19.548  "thread": {
00:05:19.548  "mask": "0x400",
00:05:19.548  "tpoint_mask": "0x0"
00:05:19.548  },
00:05:19.548  "nvme_pcie": {
00:05:19.548  "mask": "0x800",
00:05:19.548  "tpoint_mask": "0x0"
00:05:19.548  },
00:05:19.548  "iaa": {
00:05:19.548  "mask": "0x1000",
00:05:19.548  "tpoint_mask": "0x0"
00:05:19.548  },
00:05:19.548  "nvme_tcp": {
00:05:19.548  "mask": "0x2000",
00:05:19.548  "tpoint_mask": "0x0"
00:05:19.548  },
00:05:19.548  "bdev_nvme": {
00:05:19.548  "mask": "0x4000",
00:05:19.548  "tpoint_mask": "0x0"
00:05:19.548  },
00:05:19.548  "sock": {
00:05:19.548  "mask": "0x8000",
00:05:19.548  "tpoint_mask": "0x0"
00:05:19.548  },
00:05:19.548  "blob": {
00:05:19.548  "mask": "0x10000",
00:05:19.548  "tpoint_mask": "0x0"
00:05:19.548  },
00:05:19.548  "bdev_raid": {
00:05:19.548  "mask": "0x20000",
00:05:19.548  "tpoint_mask": "0x0"
00:05:19.548  },
00:05:19.548  "scheduler": {
00:05:19.548  "mask": "0x40000",
00:05:19.548  "tpoint_mask": "0x0"
00:05:19.548  }
00:05:19.548  }'
00:05:19.548    10:02:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:05:19.548   10:02:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:05:19.548    10:02:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:05:19.548   10:02:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:05:19.548    10:02:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:05:19.548   10:02:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:05:19.548    10:02:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:05:19.548   10:02:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:05:19.548    10:02:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:05:19.548   10:02:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:05:19.548  
00:05:19.548  real	0m0.182s
00:05:19.548  user	0m0.166s
00:05:19.548  sys	0m0.010s
00:05:19.548   10:02:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:19.548   10:02:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:05:19.548  ************************************
00:05:19.548  END TEST rpc_trace_cmd_test
00:05:19.548  ************************************
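`trace_get_info` above reports `tpoint_group_mask "0x8"` (the bdev group, enabled by launching `spdk_tgt -e bdev`) and an all-ones per-tracepoint mask for that group, which rpc.sh@47 checks is non-zero. The bit arithmetic those checks rely on can be sketched as follows, using group masks copied from the JSON in the log:

```python
# Group masks from the trace_get_info JSON above (hex strings).
groups = {"iscsi_conn": "0x2", "scsi": "0x4", "bdev": "0x8",
          "nvmf_rdma": "0x10", "nvmf_tcp": "0x20"}

tpoint_group_mask = int("0x8", 16)  # set by `spdk_tgt -e bdev`

# Which groups does the mask enable? Each group owns one bit.
enabled = [name for name, m in groups.items()
           if int(m, 16) & tpoint_group_mask]
print(enabled)  # → ['bdev']

# rpc.sh@47 checks the bdev group's tpoint_mask is non-zero; all-ones
# means every individual tracepoint in the group is switched on.
bdev_tpoint_mask = int("0xffffffffffffffff", 16)
print(bdev_tpoint_mask != 0, bin(bdev_tpoint_mask).count("1"))  # → True 64
```

This is why the earlier startup notices point at `spdk_trace -s spdk_tgt -p 1735645`: with the bdev bit set, the shm file named in `tpoint_shm_path` carries those tracepoints for offline analysis.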
00:05:19.548   10:02:14 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:05:19.548   10:02:14 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:05:19.548   10:02:14 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:05:19.548   10:02:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:19.548   10:02:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:19.548   10:02:14 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:19.548  ************************************
00:05:19.548  START TEST rpc_daemon_integrity
00:05:19.548  ************************************
00:05:19.548   10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:05:19.548    10:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:19.548    10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:19.548    10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:19.548    10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:19.548   10:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:19.548    10:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:19.807   10:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:19.807    10:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:19.807    10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:19.807    10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:19.807    10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:19.807   10:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:05:19.807    10:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:19.807    10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:19.807    10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:19.807    10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:19.807   10:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:19.807  {
00:05:19.807  "name": "Malloc2",
00:05:19.807  "aliases": [
00:05:19.807  "d2b9d6f6-5d10-4aa4-bc7b-d245d7c4c458"
00:05:19.807  ],
00:05:19.807  "product_name": "Malloc disk",
00:05:19.807  "block_size": 512,
00:05:19.807  "num_blocks": 16384,
00:05:19.807  "uuid": "d2b9d6f6-5d10-4aa4-bc7b-d245d7c4c458",
00:05:19.807  "assigned_rate_limits": {
00:05:19.807  "rw_ios_per_sec": 0,
00:05:19.807  "rw_mbytes_per_sec": 0,
00:05:19.807  "r_mbytes_per_sec": 0,
00:05:19.807  "w_mbytes_per_sec": 0
00:05:19.807  },
00:05:19.807  "claimed": false,
00:05:19.807  "zoned": false,
00:05:19.807  "supported_io_types": {
00:05:19.807  "read": true,
00:05:19.807  "write": true,
00:05:19.807  "unmap": true,
00:05:19.807  "flush": true,
00:05:19.807  "reset": true,
00:05:19.807  "nvme_admin": false,
00:05:19.807  "nvme_io": false,
00:05:19.807  "nvme_io_md": false,
00:05:19.807  "write_zeroes": true,
00:05:19.807  "zcopy": true,
00:05:19.807  "get_zone_info": false,
00:05:19.807  "zone_management": false,
00:05:19.807  "zone_append": false,
00:05:19.807  "compare": false,
00:05:19.807  "compare_and_write": false,
00:05:19.807  "abort": true,
00:05:19.807  "seek_hole": false,
00:05:19.807  "seek_data": false,
00:05:19.807  "copy": true,
00:05:19.807  "nvme_iov_md": false
00:05:19.807  },
00:05:19.807  "memory_domains": [
00:05:19.807  {
00:05:19.807  "dma_device_id": "system",
00:05:19.807  "dma_device_type": 1
00:05:19.807  },
00:05:19.807  {
00:05:19.807  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:19.807  "dma_device_type": 2
00:05:19.807  }
00:05:19.807  ],
00:05:19.807  "driver_specific": {}
00:05:19.807  }
00:05:19.807  ]'
00:05:19.807    10:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:19.807   10:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:19.807   10:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:05:19.807   10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:19.807   10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:19.807  [2024-11-20 10:02:14.767142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:05:19.807  [2024-11-20 10:02:14.767203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:19.807  [2024-11-20 10:02:14.767242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x616000023d80
00:05:19.807  [2024-11-20 10:02:14.767262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:19.807  [2024-11-20 10:02:14.769693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:19.807  [2024-11-20 10:02:14.769736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:19.807  Passthru0
00:05:19.807   10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:19.807    10:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:19.807    10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:19.807    10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:19.807    10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:19.807   10:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:19.807  {
00:05:19.807  "name": "Malloc2",
00:05:19.807  "aliases": [
00:05:19.807  "d2b9d6f6-5d10-4aa4-bc7b-d245d7c4c458"
00:05:19.807  ],
00:05:19.807  "product_name": "Malloc disk",
00:05:19.807  "block_size": 512,
00:05:19.807  "num_blocks": 16384,
00:05:19.807  "uuid": "d2b9d6f6-5d10-4aa4-bc7b-d245d7c4c458",
00:05:19.807  "assigned_rate_limits": {
00:05:19.807  "rw_ios_per_sec": 0,
00:05:19.807  "rw_mbytes_per_sec": 0,
00:05:19.807  "r_mbytes_per_sec": 0,
00:05:19.807  "w_mbytes_per_sec": 0
00:05:19.807  },
00:05:19.807  "claimed": true,
00:05:19.807  "claim_type": "exclusive_write",
00:05:19.807  "zoned": false,
00:05:19.807  "supported_io_types": {
00:05:19.807  "read": true,
00:05:19.807  "write": true,
00:05:19.807  "unmap": true,
00:05:19.807  "flush": true,
00:05:19.807  "reset": true,
00:05:19.807  "nvme_admin": false,
00:05:19.807  "nvme_io": false,
00:05:19.807  "nvme_io_md": false,
00:05:19.807  "write_zeroes": true,
00:05:19.807  "zcopy": true,
00:05:19.807  "get_zone_info": false,
00:05:19.807  "zone_management": false,
00:05:19.807  "zone_append": false,
00:05:19.807  "compare": false,
00:05:19.807  "compare_and_write": false,
00:05:19.807  "abort": true,
00:05:19.807  "seek_hole": false,
00:05:19.807  "seek_data": false,
00:05:19.807  "copy": true,
00:05:19.807  "nvme_iov_md": false
00:05:19.807  },
00:05:19.807  "memory_domains": [
00:05:19.807  {
00:05:19.807  "dma_device_id": "system",
00:05:19.807  "dma_device_type": 1
00:05:19.807  },
00:05:19.807  {
00:05:19.807  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:19.807  "dma_device_type": 2
00:05:19.807  }
00:05:19.807  ],
00:05:19.807  "driver_specific": {}
00:05:19.807  },
00:05:19.807  {
00:05:19.807  "name": "Passthru0",
00:05:19.807  "aliases": [
00:05:19.807  "7a121c70-2fff-5085-831c-a9fdb6c33367"
00:05:19.807  ],
00:05:19.807  "product_name": "passthru",
00:05:19.807  "block_size": 512,
00:05:19.807  "num_blocks": 16384,
00:05:19.807  "uuid": "7a121c70-2fff-5085-831c-a9fdb6c33367",
00:05:19.807  "assigned_rate_limits": {
00:05:19.807  "rw_ios_per_sec": 0,
00:05:19.807  "rw_mbytes_per_sec": 0,
00:05:19.807  "r_mbytes_per_sec": 0,
00:05:19.807  "w_mbytes_per_sec": 0
00:05:19.807  },
00:05:19.807  "claimed": false,
00:05:19.807  "zoned": false,
00:05:19.807  "supported_io_types": {
00:05:19.807  "read": true,
00:05:19.807  "write": true,
00:05:19.807  "unmap": true,
00:05:19.807  "flush": true,
00:05:19.807  "reset": true,
00:05:19.807  "nvme_admin": false,
00:05:19.807  "nvme_io": false,
00:05:19.807  "nvme_io_md": false,
00:05:19.807  "write_zeroes": true,
00:05:19.807  "zcopy": true,
00:05:19.807  "get_zone_info": false,
00:05:19.807  "zone_management": false,
00:05:19.807  "zone_append": false,
00:05:19.807  "compare": false,
00:05:19.807  "compare_and_write": false,
00:05:19.807  "abort": true,
00:05:19.807  "seek_hole": false,
00:05:19.807  "seek_data": false,
00:05:19.807  "copy": true,
00:05:19.807  "nvme_iov_md": false
00:05:19.807  },
00:05:19.807  "memory_domains": [
00:05:19.807  {
00:05:19.807  "dma_device_id": "system",
00:05:19.807  "dma_device_type": 1
00:05:19.807  },
00:05:19.807  {
00:05:19.807  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:19.807  "dma_device_type": 2
00:05:19.807  }
00:05:19.807  ],
00:05:19.807  "driver_specific": {
00:05:19.807  "passthru": {
00:05:19.807  "name": "Passthru0",
00:05:19.807  "base_bdev_name": "Malloc2"
00:05:19.807  }
00:05:19.807  }
00:05:19.807  }
00:05:19.807  ]'
00:05:19.807    10:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:19.807   10:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:19.807   10:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:19.807   10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:19.807   10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:19.807   10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:19.807   10:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:05:19.807   10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:19.807   10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:19.807   10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:19.807    10:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:19.807    10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:19.807    10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:19.808    10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:19.808   10:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:19.808    10:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:19.808   10:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:19.808  
00:05:19.808  real	0m0.239s
00:05:19.808  user	0m0.136s
00:05:19.808  sys	0m0.026s
00:05:19.808   10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:19.808   10:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:19.808  ************************************
00:05:19.808  END TEST rpc_daemon_integrity
00:05:19.808  ************************************
00:05:19.808   10:02:14 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:05:19.808   10:02:14 rpc -- rpc/rpc.sh@84 -- # killprocess 1735645
00:05:19.808   10:02:14 rpc -- common/autotest_common.sh@954 -- # '[' -z 1735645 ']'
00:05:19.808   10:02:14 rpc -- common/autotest_common.sh@958 -- # kill -0 1735645
00:05:19.808    10:02:14 rpc -- common/autotest_common.sh@959 -- # uname
00:05:19.808   10:02:14 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:19.808    10:02:14 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1735645
00:05:20.065   10:02:14 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:20.065   10:02:14 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:20.065   10:02:14 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1735645'
00:05:20.065  killing process with pid 1735645
00:05:20.065   10:02:14 rpc -- common/autotest_common.sh@973 -- # kill 1735645
00:05:20.065   10:02:14 rpc -- common/autotest_common.sh@978 -- # wait 1735645
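The `killprocess` sequence above checks that the pid is still alive (`kill -0`), looks up its process name, refuses to proceed if the name is `sudo`, then kills and waits on it. A self-contained sketch of that flow, assuming only POSIX `ps`/`kill` (the helper name and structure mirror `autotest_common.sh`, but this is an illustration, not that script):

```shell
# Demonstration target: a throwaway background sleep instead of spdk_tgt.
killprocess_demo() {
    pid=$1
    kill -0 "$pid" 2>/dev/null || return 1      # bail out if already gone
    name=$(ps -o comm= -p "$pid")               # process name, no header
    [ "$name" = "sudo" ] && return 1            # never kill sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap; SIGTERM exit is expected
    return 0
}

sleep 30 &
killprocess_demo $!
```

The `wait` matters: without it the test would race the process teardown, which is why the log shows `kill` immediately followed by `wait` on the same pid.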
00:05:21.964  
00:05:21.964  real	0m4.302s
00:05:21.964  user	0m4.808s
00:05:21.964  sys	0m0.800s
00:05:21.964   10:02:16 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:21.964   10:02:16 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:21.964  ************************************
00:05:21.964  END TEST rpc
00:05:21.964  ************************************
00:05:21.964   10:02:16  -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:05:21.964   10:02:16  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:21.964   10:02:16  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:21.964   10:02:16  -- common/autotest_common.sh@10 -- # set +x
00:05:21.964  ************************************
00:05:21.964  START TEST skip_rpc
00:05:21.964  ************************************
00:05:21.964   10:02:16 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:05:21.964  * Looking for test storage...
00:05:21.964  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc
00:05:21.964    10:02:17 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:21.964     10:02:17 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:21.964     10:02:17 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:05:22.222    10:02:17 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:22.222    10:02:17 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:22.222    10:02:17 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:22.222    10:02:17 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:22.222    10:02:17 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:22.222    10:02:17 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:22.222    10:02:17 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:22.222    10:02:17 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:22.222    10:02:17 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:22.222    10:02:17 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:22.222    10:02:17 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:22.222    10:02:17 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:22.222    10:02:17 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:22.222    10:02:17 skip_rpc -- scripts/common.sh@345 -- # : 1
00:05:22.222    10:02:17 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:22.222    10:02:17 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:22.222     10:02:17 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:05:22.222     10:02:17 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:05:22.222     10:02:17 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:22.222     10:02:17 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:05:22.222    10:02:17 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:22.222     10:02:17 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:05:22.222     10:02:17 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:05:22.222     10:02:17 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:22.222     10:02:17 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:05:22.222    10:02:17 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:22.222    10:02:17 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:22.222    10:02:17 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:22.222    10:02:17 skip_rpc -- scripts/common.sh@368 -- # return 0
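The `cmp_versions` walk above splits each version on `.-:` into arrays and compares them field by field to decide whether the installed `lcov` predates 1.15. Where GNU coreutils is available, `sort -V` gives the same ordering in one call; a sketch of the `lt 1.15 2` check under that assumption (this is not the `scripts/common.sh` implementation, just an equivalent):

```shell
# lt A B: true when version A sorts strictly before version B.
lt() {
    [ "$1" = "$2" ] && return 1
    # sort -V orders version strings; if $1 comes first, it is the smaller.
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

lt 1.15 2 && echo "1.15 < 2: pre-1.15-style lcov options selected"
```

The field-by-field loop in `scripts/common.sh` avoids the GNU dependency, which is why the log shows the longer `read -ra ver1` / `decimal` dance instead.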
00:05:22.222    10:02:17 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:22.222    10:02:17 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:22.222  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:22.222  		--rc genhtml_branch_coverage=1
00:05:22.222  		--rc genhtml_function_coverage=1
00:05:22.222  		--rc genhtml_legend=1
00:05:22.222  		--rc geninfo_all_blocks=1
00:05:22.222  		--rc geninfo_unexecuted_blocks=1
00:05:22.222  		
00:05:22.222  		'
00:05:22.222    10:02:17 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:22.222  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:22.223  		--rc genhtml_branch_coverage=1
00:05:22.223  		--rc genhtml_function_coverage=1
00:05:22.223  		--rc genhtml_legend=1
00:05:22.223  		--rc geninfo_all_blocks=1
00:05:22.223  		--rc geninfo_unexecuted_blocks=1
00:05:22.223  		
00:05:22.223  		'
00:05:22.223    10:02:17 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:22.223  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:22.223  		--rc genhtml_branch_coverage=1
00:05:22.223  		--rc genhtml_function_coverage=1
00:05:22.223  		--rc genhtml_legend=1
00:05:22.223  		--rc geninfo_all_blocks=1
00:05:22.223  		--rc geninfo_unexecuted_blocks=1
00:05:22.223  		
00:05:22.223  		'
00:05:22.223    10:02:17 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:05:22.223  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:22.223  		--rc genhtml_branch_coverage=1
00:05:22.223  		--rc genhtml_function_coverage=1
00:05:22.223  		--rc genhtml_legend=1
00:05:22.223  		--rc geninfo_all_blocks=1
00:05:22.223  		--rc geninfo_unexecuted_blocks=1
00:05:22.223  		
00:05:22.223  		'
00:05:22.223   10:02:17 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/config.json
00:05:22.223   10:02:17 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/log.txt
00:05:22.223   10:02:17 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:05:22.223   10:02:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:22.223   10:02:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:22.223   10:02:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:22.223  ************************************
00:05:22.223  START TEST skip_rpc
00:05:22.223  ************************************
00:05:22.223   10:02:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:05:22.223   10:02:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1736363
00:05:22.223   10:02:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:05:22.223   10:02:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:22.223   10:02:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:05:22.223  [2024-11-20 10:02:17.268856] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:05:22.223  [2024-11-20 10:02:17.268988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1736363 ]
00:05:22.481  [2024-11-20 10:02:17.399103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:22.481  [2024-11-20 10:02:17.512156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:27.779   10:02:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:05:27.779   10:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:05:27.779   10:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:05:27.779   10:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:05:27.779   10:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:27.779    10:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:05:27.779   10:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:27.779   10:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:05:27.779   10:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:27.779   10:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:27.779   10:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:05:27.779   10:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:05:27.779   10:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:27.779   10:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:27.779   10:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
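The `NOT rpc_cmd spdk_get_version` block above inverts the usual success criterion: the target was started with `--no-rpc-server`, so the RPC must fail, and the helper captures the exit status in `es` and passes only when it is non-zero. A minimal sketch of that pattern (simplified from the `valid_exec_arg`/`es` machinery in `autotest_common.sh`):

```shell
# NOT cmd...: succeed only if the wrapped command fails.
NOT() {
    es=0
    "$@" || es=$?          # capture the exit status without tripping set -e
    [ "$es" -ne 0 ]        # invert: failure of the command is our success
}

NOT false && echo "command failed, as the test expects"
```

This is why the log line `[[ 1 == 0 ]]` is not an error here: the `1` is the expected non-zero status from the refused RPC.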
00:05:27.779   10:02:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:05:27.779   10:02:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1736363
00:05:27.779   10:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1736363 ']'
00:05:27.779   10:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1736363
00:05:27.779    10:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:05:27.779   10:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:27.779    10:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1736363
00:05:27.779   10:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:27.779   10:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:27.779   10:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1736363'
00:05:27.779  killing process with pid 1736363
00:05:27.779   10:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1736363
00:05:27.779   10:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1736363
00:05:29.164  
00:05:29.164  real	0m7.034s
00:05:29.164  user	0m6.556s
00:05:29.164  sys	0m0.477s
00:05:29.164   10:02:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:29.164   10:02:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:29.164  ************************************
00:05:29.164  END TEST skip_rpc
00:05:29.164  ************************************
00:05:29.164   10:02:24 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:05:29.164   10:02:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:29.164   10:02:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:29.164   10:02:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:29.164  ************************************
00:05:29.164  START TEST skip_rpc_with_json
00:05:29.164  ************************************
00:05:29.164   10:02:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:05:29.165   10:02:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:05:29.165   10:02:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1737203
00:05:29.165   10:02:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:29.165   10:02:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:29.165   10:02:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1737203
00:05:29.165   10:02:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1737203 ']'
00:05:29.165   10:02:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:29.165   10:02:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:29.165   10:02:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:29.165  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:29.165   10:02:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:29.165   10:02:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:29.425  [2024-11-20 10:02:24.362406] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:05:29.425  [2024-11-20 10:02:24.362558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737203 ]
00:05:29.425  [2024-11-20 10:02:24.504019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:29.685  [2024-11-20 10:02:24.621587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:30.629   10:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:30.629   10:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:05:30.629   10:02:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:05:30.629   10:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:30.629   10:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:30.629  [2024-11-20 10:02:25.440655] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:05:30.629  request:
00:05:30.629  {
00:05:30.629  "trtype": "tcp",
00:05:30.629  "method": "nvmf_get_transports",
00:05:30.629  "req_id": 1
00:05:30.629  }
00:05:30.629  Got JSON-RPC error response
00:05:30.629  response:
00:05:30.629  {
00:05:30.629  "code": -19,
00:05:30.629  "message": "No such device"
00:05:30.629  }
00:05:30.629   10:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:05:30.629   10:02:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:05:30.629   10:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:30.629   10:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:30.629  [2024-11-20 10:02:25.448815] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:30.629   10:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:30.629   10:02:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:05:30.629   10:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:30.629   10:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:30.629   10:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:30.629   10:02:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/config.json
00:05:30.629  {
00:05:30.629  "subsystems": [
00:05:30.629  {
00:05:30.629  "subsystem": "fsdev",
00:05:30.629  "config": [
00:05:30.629  {
00:05:30.629  "method": "fsdev_set_opts",
00:05:30.629  "params": {
00:05:30.629  "fsdev_io_pool_size": 65535,
00:05:30.629  "fsdev_io_cache_size": 256
00:05:30.629  }
00:05:30.629  }
00:05:30.629  ]
00:05:30.629  },
00:05:30.629  {
00:05:30.629  "subsystem": "vfio_user_target",
00:05:30.629  "config": null
00:05:30.629  },
00:05:30.629  {
00:05:30.629  "subsystem": "keyring",
00:05:30.629  "config": []
00:05:30.629  },
00:05:30.629  {
00:05:30.629  "subsystem": "iobuf",
00:05:30.629  "config": [
00:05:30.629  {
00:05:30.629  "method": "iobuf_set_options",
00:05:30.629  "params": {
00:05:30.629  "small_pool_count": 8192,
00:05:30.629  "large_pool_count": 1024,
00:05:30.629  "small_bufsize": 8192,
00:05:30.629  "large_bufsize": 135168,
00:05:30.629  "enable_numa": false
00:05:30.629  }
00:05:30.629  }
00:05:30.629  ]
00:05:30.629  },
00:05:30.629  {
00:05:30.629  "subsystem": "sock",
00:05:30.629  "config": [
00:05:30.629  {
00:05:30.629  "method": "sock_set_default_impl",
00:05:30.629  "params": {
00:05:30.629  "impl_name": "posix"
00:05:30.629  }
00:05:30.629  },
00:05:30.629  {
00:05:30.629  "method": "sock_impl_set_options",
00:05:30.629  "params": {
00:05:30.629  "impl_name": "ssl",
00:05:30.629  "recv_buf_size": 4096,
00:05:30.629  "send_buf_size": 4096,
00:05:30.629  "enable_recv_pipe": true,
00:05:30.629  "enable_quickack": false,
00:05:30.629  "enable_placement_id": 0,
00:05:30.629  "enable_zerocopy_send_server": true,
00:05:30.629  "enable_zerocopy_send_client": false,
00:05:30.629  "zerocopy_threshold": 0,
00:05:30.629  "tls_version": 0,
00:05:30.629  "enable_ktls": false
00:05:30.629  }
00:05:30.629  },
00:05:30.629  {
00:05:30.629  "method": "sock_impl_set_options",
00:05:30.629  "params": {
00:05:30.629  "impl_name": "posix",
00:05:30.629  "recv_buf_size": 2097152,
00:05:30.629  "send_buf_size": 2097152,
00:05:30.629  "enable_recv_pipe": true,
00:05:30.629  "enable_quickack": false,
00:05:30.629  "enable_placement_id": 0,
00:05:30.629  "enable_zerocopy_send_server": true,
00:05:30.629  "enable_zerocopy_send_client": false,
00:05:30.629  "zerocopy_threshold": 0,
00:05:30.629  "tls_version": 0,
00:05:30.629  "enable_ktls": false
00:05:30.629  }
00:05:30.629  }
00:05:30.629  ]
00:05:30.629  },
00:05:30.629  {
00:05:30.629  "subsystem": "vmd",
00:05:30.629  "config": []
00:05:30.629  },
00:05:30.629  {
00:05:30.629  "subsystem": "accel",
00:05:30.629  "config": [
00:05:30.629  {
00:05:30.629  "method": "accel_set_options",
00:05:30.629  "params": {
00:05:30.629  "small_cache_size": 128,
00:05:30.629  "large_cache_size": 16,
00:05:30.630  "task_count": 2048,
00:05:30.630  "sequence_count": 2048,
00:05:30.630  "buf_count": 2048
00:05:30.630  }
00:05:30.630  }
00:05:30.630  ]
00:05:30.630  },
00:05:30.630  {
00:05:30.630  "subsystem": "bdev",
00:05:30.630  "config": [
00:05:30.630  {
00:05:30.630  "method": "bdev_set_options",
00:05:30.630  "params": {
00:05:30.630  "bdev_io_pool_size": 65535,
00:05:30.630  "bdev_io_cache_size": 256,
00:05:30.630  "bdev_auto_examine": true,
00:05:30.630  "iobuf_small_cache_size": 128,
00:05:30.630  "iobuf_large_cache_size": 16
00:05:30.630  }
00:05:30.630  },
00:05:30.630  {
00:05:30.630  "method": "bdev_raid_set_options",
00:05:30.630  "params": {
00:05:30.630  "process_window_size_kb": 1024,
00:05:30.630  "process_max_bandwidth_mb_sec": 0
00:05:30.630  }
00:05:30.630  },
00:05:30.630  {
00:05:30.630  "method": "bdev_iscsi_set_options",
00:05:30.630  "params": {
00:05:30.630  "timeout_sec": 30
00:05:30.630  }
00:05:30.630  },
00:05:30.630  {
00:05:30.630  "method": "bdev_nvme_set_options",
00:05:30.630  "params": {
00:05:30.630  "action_on_timeout": "none",
00:05:30.630  "timeout_us": 0,
00:05:30.630  "timeout_admin_us": 0,
00:05:30.630  "keep_alive_timeout_ms": 10000,
00:05:30.630  "arbitration_burst": 0,
00:05:30.630  "low_priority_weight": 0,
00:05:30.630  "medium_priority_weight": 0,
00:05:30.630  "high_priority_weight": 0,
00:05:30.630  "nvme_adminq_poll_period_us": 10000,
00:05:30.630  "nvme_ioq_poll_period_us": 0,
00:05:30.630  "io_queue_requests": 0,
00:05:30.630  "delay_cmd_submit": true,
00:05:30.630  "transport_retry_count": 4,
00:05:30.630  "bdev_retry_count": 3,
00:05:30.630  "transport_ack_timeout": 0,
00:05:30.630  "ctrlr_loss_timeout_sec": 0,
00:05:30.630  "reconnect_delay_sec": 0,
00:05:30.630  "fast_io_fail_timeout_sec": 0,
00:05:30.630  "disable_auto_failback": false,
00:05:30.630  "generate_uuids": false,
00:05:30.630  "transport_tos": 0,
00:05:30.630  "nvme_error_stat": false,
00:05:30.630  "rdma_srq_size": 0,
00:05:30.630  "io_path_stat": false,
00:05:30.630  "allow_accel_sequence": false,
00:05:30.630  "rdma_max_cq_size": 0,
00:05:30.630  "rdma_cm_event_timeout_ms": 0,
00:05:30.630  "dhchap_digests": [
00:05:30.630  "sha256",
00:05:30.630  "sha384",
00:05:30.630  "sha512"
00:05:30.630  ],
00:05:30.630  "dhchap_dhgroups": [
00:05:30.630  "null",
00:05:30.630  "ffdhe2048",
00:05:30.630  "ffdhe3072",
00:05:30.630  "ffdhe4096",
00:05:30.630  "ffdhe6144",
00:05:30.630  "ffdhe8192"
00:05:30.630  ]
00:05:30.630  }
00:05:30.630  },
00:05:30.630  {
00:05:30.630  "method": "bdev_nvme_set_hotplug",
00:05:30.630  "params": {
00:05:30.630  "period_us": 100000,
00:05:30.630  "enable": false
00:05:30.630  }
00:05:30.630  },
00:05:30.630  {
00:05:30.630  "method": "bdev_wait_for_examine"
00:05:30.630  }
00:05:30.630  ]
00:05:30.630  },
00:05:30.630  {
00:05:30.630  "subsystem": "scsi",
00:05:30.630  "config": null
00:05:30.630  },
00:05:30.630  {
00:05:30.630  "subsystem": "scheduler",
00:05:30.630  "config": [
00:05:30.630  {
00:05:30.630  "method": "framework_set_scheduler",
00:05:30.630  "params": {
00:05:30.630  "name": "static"
00:05:30.630  }
00:05:30.630  }
00:05:30.630  ]
00:05:30.630  },
00:05:30.630  {
00:05:30.630  "subsystem": "vhost_scsi",
00:05:30.630  "config": []
00:05:30.630  },
00:05:30.630  {
00:05:30.630  "subsystem": "vhost_blk",
00:05:30.630  "config": []
00:05:30.630  },
00:05:30.630  {
00:05:30.630  "subsystem": "ublk",
00:05:30.630  "config": []
00:05:30.630  },
00:05:30.630  {
00:05:30.630  "subsystem": "nbd",
00:05:30.630  "config": []
00:05:30.630  },
00:05:30.630  {
00:05:30.630  "subsystem": "nvmf",
00:05:30.630  "config": [
00:05:30.630  {
00:05:30.630  "method": "nvmf_set_config",
00:05:30.630  "params": {
00:05:30.630  "discovery_filter": "match_any",
00:05:30.630  "admin_cmd_passthru": {
00:05:30.630  "identify_ctrlr": false
00:05:30.630  },
00:05:30.630  "dhchap_digests": [
00:05:30.630  "sha256",
00:05:30.630  "sha384",
00:05:30.630  "sha512"
00:05:30.630  ],
00:05:30.630  "dhchap_dhgroups": [
00:05:30.630  "null",
00:05:30.630  "ffdhe2048",
00:05:30.630  "ffdhe3072",
00:05:30.630  "ffdhe4096",
00:05:30.630  "ffdhe6144",
00:05:30.630  "ffdhe8192"
00:05:30.630  ]
00:05:30.630  }
00:05:30.630  },
00:05:30.630  {
00:05:30.630  "method": "nvmf_set_max_subsystems",
00:05:30.630  "params": {
00:05:30.630  "max_subsystems": 1024
00:05:30.630  }
00:05:30.630  },
00:05:30.630  {
00:05:30.630  "method": "nvmf_set_crdt",
00:05:30.630  "params": {
00:05:30.630  "crdt1": 0,
00:05:30.630  "crdt2": 0,
00:05:30.630  "crdt3": 0
00:05:30.630  }
00:05:30.630  },
00:05:30.630  {
00:05:30.630  "method": "nvmf_create_transport",
00:05:30.630  "params": {
00:05:30.630  "trtype": "TCP",
00:05:30.630  "max_queue_depth": 128,
00:05:30.630  "max_io_qpairs_per_ctrlr": 127,
00:05:30.630  "in_capsule_data_size": 4096,
00:05:30.630  "max_io_size": 131072,
00:05:30.630  "io_unit_size": 131072,
00:05:30.630  "max_aq_depth": 128,
00:05:30.630  "num_shared_buffers": 511,
00:05:30.630  "buf_cache_size": 4294967295,
00:05:30.630  "dif_insert_or_strip": false,
00:05:30.630  "zcopy": false,
00:05:30.630  "c2h_success": true,
00:05:30.630  "sock_priority": 0,
00:05:30.630  "abort_timeout_sec": 1,
00:05:30.630  "ack_timeout": 0,
00:05:30.630  "data_wr_pool_size": 0
00:05:30.630  }
00:05:30.630  }
00:05:30.630  ]
00:05:30.630  },
00:05:30.630  {
00:05:30.630  "subsystem": "iscsi",
00:05:30.630  "config": [
00:05:30.630  {
00:05:30.630  "method": "iscsi_set_options",
00:05:30.630  "params": {
00:05:30.630  "node_base": "iqn.2016-06.io.spdk",
00:05:30.630  "max_sessions": 128,
00:05:30.630  "max_connections_per_session": 2,
00:05:30.630  "max_queue_depth": 64,
00:05:30.630  "default_time2wait": 2,
00:05:30.630  "default_time2retain": 20,
00:05:30.630  "first_burst_length": 8192,
00:05:30.630  "immediate_data": true,
00:05:30.630  "allow_duplicated_isid": false,
00:05:30.630  "error_recovery_level": 0,
00:05:30.630  "nop_timeout": 60,
00:05:30.630  "nop_in_interval": 30,
00:05:30.630  "disable_chap": false,
00:05:30.630  "require_chap": false,
00:05:30.630  "mutual_chap": false,
00:05:30.630  "chap_group": 0,
00:05:30.630  "max_large_datain_per_connection": 64,
00:05:30.630  "max_r2t_per_connection": 4,
00:05:30.630  "pdu_pool_size": 36864,
00:05:30.630  "immediate_data_pool_size": 16384,
00:05:30.630  "data_out_pool_size": 2048
00:05:30.630  }
00:05:30.630  }
00:05:30.630  ]
00:05:30.630  }
00:05:30.630  ]
00:05:30.630  }
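The dump that ends above is the runtime configuration `spdk_tgt` emits and can re-ingest via `--json` (as the next test run does with `test/rpc/config.json`). As a hedged sketch, not the file from this run, a minimal standalone config carrying just the scheduler subsystem from the dump can be written and syntax-checked like this (the `/tmp/min_config.json` path is illustrative):

```shell
#!/usr/bin/env bash
# Sketch: build a minimal SPDK-style JSON config (scheduler subsystem only,
# mirroring the "framework_set_scheduler" entry in the dump above) and
# check that it parses before handing it to spdk_tgt --json.
set -euo pipefail

cat > /tmp/min_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "scheduler",
      "config": [
        {
          "method": "framework_set_scheduler",
          "params": { "name": "static" }
        }
      ]
    }
  ]
}
EOF

# python3 -m json.tool exits non-zero on invalid JSON
python3 -m json.tool /tmp/min_config.json > /dev/null && echo "config OK"
```

In the test flow above the real file is produced by the target itself and replayed with `spdk_tgt --no-rpc-server -m 0x1 --json <file>`.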
00:05:30.630   10:02:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:05:30.630   10:02:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1737203
00:05:30.630   10:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1737203 ']'
00:05:30.630   10:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1737203
00:05:30.630    10:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:05:30.630   10:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:30.630    10:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1737203
00:05:30.630   10:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:30.630   10:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:30.630   10:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1737203'
00:05:30.630  killing process with pid 1737203
00:05:30.630   10:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1737203
00:05:30.630   10:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1737203
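The `killprocess` sequence traced above (from `autotest_common.sh`) guards the kill: it requires a non-empty pid, probes liveness with `kill -0`, refuses to signal a `sudo` wrapper, then SIGTERMs and reaps the process. A rough self-contained re-sketch of that pattern (the function body is ours, not the exact helper):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern seen in the trace: verify the pid is
# alive, refuse to kill sudo, then SIGTERM and wait for it.
set -euo pipefail

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                # process must still be alive
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" != "sudo" ] || return 1         # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap; ignore its exit status
}

sleep 30 &
killprocess $!
```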
00:05:32.541   10:02:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1737612
00:05:32.541   10:02:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/config.json
00:05:32.541   10:02:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:05:37.829   10:02:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1737612
00:05:37.829   10:02:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1737612 ']'
00:05:37.829   10:02:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1737612
00:05:37.829    10:02:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:05:37.829   10:02:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:37.829    10:02:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1737612
00:05:37.829   10:02:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:37.829   10:02:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:37.829   10:02:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1737612'
00:05:37.829  killing process with pid 1737612
00:05:37.829   10:02:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1737612
00:05:37.829   10:02:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1737612
00:05:39.742   10:02:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/log.txt
00:05:39.743   10:02:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/log.txt
00:05:39.743  
00:05:39.743  real	0m10.406s
00:05:39.743  user	0m9.944s
00:05:39.743  sys	0m1.042s
00:05:39.743   10:02:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:39.743   10:02:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:39.743  ************************************
00:05:39.743  END TEST skip_rpc_with_json
00:05:39.743  ************************************
00:05:39.743   10:02:34 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:05:39.743   10:02:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:39.743   10:02:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:39.743   10:02:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:39.743  ************************************
00:05:39.743  START TEST skip_rpc_with_delay
00:05:39.743  ************************************
00:05:39.743   10:02:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:05:39.743   10:02:34 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:05:39.743   10:02:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:05:39.743   10:02:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:05:39.743   10:02:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:05:39.743   10:02:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:39.743    10:02:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:05:39.743   10:02:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:39.743    10:02:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:05:39.743   10:02:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:39.743   10:02:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:05:39.743   10:02:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:05:39.743   10:02:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:05:39.743  [2024-11-20 10:02:34.814550] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:05:40.002   10:02:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:05:40.002   10:02:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:40.002   10:02:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:40.002   10:02:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:40.002  
00:05:40.002  real	0m0.165s
00:05:40.002  user	0m0.084s
00:05:40.002  sys	0m0.080s
00:05:40.002   10:02:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:40.002   10:02:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:05:40.002  ************************************
00:05:40.002  END TEST skip_rpc_with_delay
00:05:40.002  ************************************
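The `NOT ... es=` dance traced in this test is `autotest_common.sh`'s way of asserting that a command *fails*: run it, capture the exit status into `es`, and succeed only when `es` is non-zero — here, `spdk_tgt --no-rpc-server --wait-for-rpc` must error out. A minimal re-sketch of the idiom (our own `NOT`, simpler than the real helper, which also validates the executable and remaps `es` values above 128):

```shell
#!/usr/bin/env bash
# Sketch of the NOT/es pattern: the assertion passes only when the wrapped
# command fails, mirroring how skip_rpc_with_delay expects spdk_tgt to
# reject --wait-for-rpc when no RPC server will be started.
NOT() {
    local es=0
    "$@" || es=$?
    # Success for NOT means the wrapped command did NOT succeed.
    (( es != 0 ))
}

NOT false && echo "failing command detected"
NOT true || echo "succeeding command rejected"
```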
00:05:40.002    10:02:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:05:40.002   10:02:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:05:40.002   10:02:34 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:05:40.002   10:02:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:40.002   10:02:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:40.002   10:02:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:40.002  ************************************
00:05:40.002  START TEST exit_on_failed_rpc_init
00:05:40.002  ************************************
00:05:40.002   10:02:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:05:40.002   10:02:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1738584
00:05:40.002   10:02:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1738584
00:05:40.002   10:02:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:40.002   10:02:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1738584 ']'
00:05:40.002   10:02:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:40.002   10:02:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:40.002   10:02:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:40.002  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:40.002   10:02:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:40.002   10:02:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:40.002  [2024-11-20 10:02:35.032299] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:05:40.002  [2024-11-20 10:02:35.032432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1738584 ]
00:05:40.262  [2024-11-20 10:02:35.163354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:40.262  [2024-11-20 10:02:35.275809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:41.205   10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:41.205   10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:05:41.205   10:02:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:41.205   10:02:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:41.205   10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:05:41.205   10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:41.205   10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:05:41.205   10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:41.205    10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:05:41.205   10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:41.205    10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:05:41.205   10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:41.205   10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:05:41.205   10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:05:41.205   10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:41.205  [2024-11-20 10:02:36.184685] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:05:41.205  [2024-11-20 10:02:36.184837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1738729 ]
00:05:41.205  [2024-11-20 10:02:36.314755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:41.466  [2024-11-20 10:02:36.438258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:41.466  [2024-11-20 10:02:36.438419] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:05:41.466  [2024-11-20 10:02:36.438452] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:05:41.466  [2024-11-20 10:02:36.438471] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:41.728   10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:05:41.728   10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:41.728   10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:05:41.728   10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:05:41.728   10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:05:41.728   10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:41.728   10:02:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:05:41.728   10:02:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1738584
00:05:41.728   10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1738584 ']'
00:05:41.728   10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1738584
00:05:41.728    10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:05:41.728   10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:41.728    10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1738584
00:05:41.728   10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:41.728   10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:41.728   10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1738584'
00:05:41.728  killing process with pid 1738584
00:05:41.728   10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1738584
00:05:41.728   10:02:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1738584
00:05:43.757  
00:05:43.757  real	0m3.847s
00:05:43.757  user	0m4.267s
00:05:43.757  sys	0m0.734s
00:05:43.757   10:02:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:43.757   10:02:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:43.757  ************************************
00:05:43.757  END TEST exit_on_failed_rpc_init
00:05:43.757  ************************************
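The failure driving this test is visible in the `_spdk_rpc_listen` error above: both `spdk_tgt` instances default to the same RPC socket, `/var/tmp/spdk.sock`, so the second listener gets "in use" and `spdk_app_stop` exits non-zero. The collision can be reproduced with plain Unix domain sockets, independent of SPDK (the `/tmp/demo_rpc.sock` path is illustrative): binding a path that already exists fails with "Address already in use".

```shell
#!/usr/bin/env bash
# Sketch: two binds on one Unix domain socket path; the second must fail,
# just as the second spdk_tgt failed to listen on /var/tmp/spdk.sock.
sock=/tmp/demo_rpc.sock
rm -f "$sock"

# First "instance" claims the path (the socket file stays on disk).
python3 -c 'import socket, sys
s = socket.socket(socket.AF_UNIX)
s.bind(sys.argv[1])' "$sock"

# Second "instance" tries the same path and is refused.
if python3 -c 'import socket, sys
s = socket.socket(socket.AF_UNIX)
s.bind(sys.argv[1])' "$sock" 2>/dev/null
then
    echo "unexpected: second bind succeeded"
else
    echo "second bind refused, as expected"
fi
```

The real fix in multi-instance setups is to give each target its own socket (e.g. `spdk_tgt -r /var/tmp/spdk2.sock`); this test instead verifies that the failing second instance exits with status 1.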
00:05:43.757   10:02:38 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/config.json
00:05:43.757  
00:05:43.757  real	0m21.809s
00:05:43.757  user	0m21.033s
00:05:43.757  sys	0m2.527s
00:05:43.757   10:02:38 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:43.757   10:02:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:43.757  ************************************
00:05:43.757  END TEST skip_rpc
00:05:43.757  ************************************
00:05:43.757   10:02:38  -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:43.757   10:02:38  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:43.757   10:02:38  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:43.757   10:02:38  -- common/autotest_common.sh@10 -- # set +x
00:05:43.757  ************************************
00:05:43.757  START TEST rpc_client
00:05:43.757  ************************************
00:05:43.757   10:02:38 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:44.016  * Looking for test storage...
00:05:44.016  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_client
00:05:44.016    10:02:38 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:44.016     10:02:38 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version
00:05:44.016     10:02:38 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:44.016    10:02:38 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:44.016    10:02:38 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:44.016    10:02:38 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:44.016    10:02:38 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:44.016    10:02:38 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:05:44.016    10:02:38 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:05:44.016    10:02:38 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:05:44.016    10:02:38 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:05:44.016    10:02:38 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:05:44.016    10:02:38 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:05:44.016    10:02:38 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:05:44.016    10:02:38 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:44.016    10:02:38 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:05:44.016    10:02:38 rpc_client -- scripts/common.sh@345 -- # : 1
00:05:44.016    10:02:38 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:44.016    10:02:38 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:44.016     10:02:38 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:05:44.016     10:02:38 rpc_client -- scripts/common.sh@353 -- # local d=1
00:05:44.016     10:02:38 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:44.016     10:02:38 rpc_client -- scripts/common.sh@355 -- # echo 1
00:05:44.016    10:02:38 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:05:44.016     10:02:38 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:05:44.016     10:02:38 rpc_client -- scripts/common.sh@353 -- # local d=2
00:05:44.016     10:02:38 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:44.016     10:02:38 rpc_client -- scripts/common.sh@355 -- # echo 2
00:05:44.016    10:02:38 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:05:44.016    10:02:38 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:44.016    10:02:38 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:44.016    10:02:38 rpc_client -- scripts/common.sh@368 -- # return 0
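The long trace above is `scripts/common.sh` deciding whether the installed lcov predates version 2 (`lt 1.15 2`): split both versions on `.-:`, then compare component-wise, treating missing components as 0. A condensed re-sketch of just the less-than case (our own function name, not the real `cmp_versions`, which also handles `>`, `=` and `!=` operators):

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions walk traced above: split on ".", compare
# numerically field by field; only the '<' operator is implemented here.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # pad the shorter version with 0s
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.1 2.1 || echo "2.1 == 2.1"
```

Note the comparison is numeric per field, so `1.9 < 1.10` holds — the property that makes this safer than a plain string compare on version numbers.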
00:05:44.016    10:02:38 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:44.016    10:02:38 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:44.016  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:44.016  		--rc genhtml_branch_coverage=1
00:05:44.016  		--rc genhtml_function_coverage=1
00:05:44.016  		--rc genhtml_legend=1
00:05:44.016  		--rc geninfo_all_blocks=1
00:05:44.016  		--rc geninfo_unexecuted_blocks=1
00:05:44.016  		
00:05:44.016  		'
00:05:44.016    10:02:38 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:44.016  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:44.016  		--rc genhtml_branch_coverage=1
00:05:44.016  		--rc genhtml_function_coverage=1
00:05:44.016  		--rc genhtml_legend=1
00:05:44.016  		--rc geninfo_all_blocks=1
00:05:44.016  		--rc geninfo_unexecuted_blocks=1
00:05:44.016  		
00:05:44.016  		'
00:05:44.016    10:02:38 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:44.016  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:44.016  		--rc genhtml_branch_coverage=1
00:05:44.016  		--rc genhtml_function_coverage=1
00:05:44.016  		--rc genhtml_legend=1
00:05:44.016  		--rc geninfo_all_blocks=1
00:05:44.016  		--rc geninfo_unexecuted_blocks=1
00:05:44.016  		
00:05:44.016  		'
00:05:44.016    10:02:38 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:05:44.016  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:44.016  		--rc genhtml_branch_coverage=1
00:05:44.016  		--rc genhtml_function_coverage=1
00:05:44.016  		--rc genhtml_legend=1
00:05:44.016  		--rc geninfo_all_blocks=1
00:05:44.016  		--rc geninfo_unexecuted_blocks=1
00:05:44.016  		
00:05:44.016  		'
00:05:44.016   10:02:38 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:05:44.016  OK
00:05:44.016   10:02:39 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:05:44.016  
00:05:44.016  real	0m0.194s
00:05:44.016  user	0m0.116s
00:05:44.016  sys	0m0.086s
00:05:44.016   10:02:39 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:44.016   10:02:39 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:05:44.016  ************************************
00:05:44.016  END TEST rpc_client
00:05:44.016  ************************************
00:05:44.016   10:02:39  -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/json_config.sh
00:05:44.016   10:02:39  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:44.016   10:02:39  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:44.016   10:02:39  -- common/autotest_common.sh@10 -- # set +x
00:05:44.016  ************************************
00:05:44.016  START TEST json_config
00:05:44.016  ************************************
00:05:44.016   10:02:39 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/json_config.sh
00:05:44.016    10:02:39 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:44.016     10:02:39 json_config -- common/autotest_common.sh@1693 -- # lcov --version
00:05:44.016     10:02:39 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:44.276    10:02:39 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:44.276    10:02:39 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:44.276    10:02:39 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:44.276    10:02:39 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:44.276    10:02:39 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:05:44.276    10:02:39 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:05:44.276    10:02:39 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:05:44.276    10:02:39 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:05:44.276    10:02:39 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:05:44.276    10:02:39 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:05:44.276    10:02:39 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:05:44.276    10:02:39 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:44.276    10:02:39 json_config -- scripts/common.sh@344 -- # case "$op" in
00:05:44.276    10:02:39 json_config -- scripts/common.sh@345 -- # : 1
00:05:44.276    10:02:39 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:44.276    10:02:39 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:44.276     10:02:39 json_config -- scripts/common.sh@365 -- # decimal 1
00:05:44.276     10:02:39 json_config -- scripts/common.sh@353 -- # local d=1
00:05:44.276     10:02:39 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:44.276     10:02:39 json_config -- scripts/common.sh@355 -- # echo 1
00:05:44.276    10:02:39 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:05:44.276     10:02:39 json_config -- scripts/common.sh@366 -- # decimal 2
00:05:44.276     10:02:39 json_config -- scripts/common.sh@353 -- # local d=2
00:05:44.276     10:02:39 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:44.276     10:02:39 json_config -- scripts/common.sh@355 -- # echo 2
00:05:44.276    10:02:39 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:05:44.276    10:02:39 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:44.276    10:02:39 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:44.276    10:02:39 json_config -- scripts/common.sh@368 -- # return 0
00:05:44.276    10:02:39 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:44.276    10:02:39 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:44.276  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:44.276  		--rc genhtml_branch_coverage=1
00:05:44.276  		--rc genhtml_function_coverage=1
00:05:44.276  		--rc genhtml_legend=1
00:05:44.276  		--rc geninfo_all_blocks=1
00:05:44.276  		--rc geninfo_unexecuted_blocks=1
00:05:44.276  		
00:05:44.276  		'
00:05:44.276    10:02:39 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:44.276  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:44.276  		--rc genhtml_branch_coverage=1
00:05:44.276  		--rc genhtml_function_coverage=1
00:05:44.276  		--rc genhtml_legend=1
00:05:44.276  		--rc geninfo_all_blocks=1
00:05:44.276  		--rc geninfo_unexecuted_blocks=1
00:05:44.276  		
00:05:44.276  		'
00:05:44.276    10:02:39 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:44.276  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:44.276  		--rc genhtml_branch_coverage=1
00:05:44.276  		--rc genhtml_function_coverage=1
00:05:44.276  		--rc genhtml_legend=1
00:05:44.276  		--rc geninfo_all_blocks=1
00:05:44.276  		--rc geninfo_unexecuted_blocks=1
00:05:44.276  		
00:05:44.276  		'
00:05:44.276    10:02:39 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:05:44.276  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:44.276  		--rc genhtml_branch_coverage=1
00:05:44.276  		--rc genhtml_function_coverage=1
00:05:44.276  		--rc genhtml_legend=1
00:05:44.276  		--rc geninfo_all_blocks=1
00:05:44.276  		--rc geninfo_unexecuted_blocks=1
00:05:44.276  		
00:05:44.276  		'
00:05:44.276   10:02:39 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh
00:05:44.276     10:02:39 json_config -- nvmf/common.sh@7 -- # uname -s
00:05:44.276    10:02:39 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:44.276    10:02:39 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:44.276    10:02:39 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:44.276    10:02:39 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:44.276    10:02:39 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:44.276    10:02:39 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:44.276    10:02:39 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:44.276    10:02:39 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:44.276    10:02:39 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:44.276     10:02:39 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:44.276    10:02:39 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92072e00-b2cb-e211-b423-001e67898f4e
00:05:44.276    10:02:39 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=92072e00-b2cb-e211-b423-001e67898f4e
00:05:44.277    10:02:39 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:44.277    10:02:39 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:44.277    10:02:39 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:44.277    10:02:39 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:44.277    10:02:39 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/common.sh
00:05:44.277     10:02:39 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:05:44.277     10:02:39 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:44.277     10:02:39 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:44.277     10:02:39 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:44.277      10:02:39 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:44.277      10:02:39 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:44.277      10:02:39 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:44.277      10:02:39 json_config -- paths/export.sh@5 -- # export PATH
00:05:44.277      10:02:39 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:44.277    10:02:39 json_config -- nvmf/common.sh@51 -- # : 0
00:05:44.277    10:02:39 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:44.277    10:02:39 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:44.277    10:02:39 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:44.277    10:02:39 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:44.277    10:02:39 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:44.277    10:02:39 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:44.277  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:44.277    10:02:39 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:44.277    10:02:39 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:44.277    10:02:39 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:44.277   10:02:39 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/common.sh
00:05:44.277   10:02:39 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:05:44.277   10:02:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:05:44.277   10:02:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:05:44.277   10:02:39 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 	SPDK_TEST_ISCSI + 	SPDK_TEST_NVMF + 	SPDK_TEST_VHOST + 	SPDK_TEST_VHOST_INIT + 	SPDK_TEST_RBD == 0 ))
00:05:44.277   10:02:39 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests'
00:05:44.277  WARNING: No tests are enabled so not running JSON configuration tests
00:05:44.277   10:02:39 json_config -- json_config/json_config.sh@28 -- # exit 0
00:05:44.277  
00:05:44.277  real	0m0.140s
00:05:44.277  user	0m0.096s
00:05:44.277  sys	0m0.048s
00:05:44.277   10:02:39 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:44.277   10:02:39 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:44.277  ************************************
00:05:44.277  END TEST json_config
00:05:44.277  ************************************
00:05:44.277   10:02:39  -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:44.277   10:02:39  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:44.277   10:02:39  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:44.277   10:02:39  -- common/autotest_common.sh@10 -- # set +x
00:05:44.277  ************************************
00:05:44.277  START TEST json_config_extra_key
00:05:44.277  ************************************
00:05:44.277   10:02:39 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:44.277    10:02:39 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:44.277     10:02:39 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version
00:05:44.277     10:02:39 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:44.536    10:02:39 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:44.536    10:02:39 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:44.536    10:02:39 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:44.536    10:02:39 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:44.536    10:02:39 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:05:44.536    10:02:39 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:05:44.536    10:02:39 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:05:44.536    10:02:39 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:05:44.536    10:02:39 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:05:44.536    10:02:39 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:05:44.536    10:02:39 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:05:44.536    10:02:39 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:44.536    10:02:39 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:05:44.536    10:02:39 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:05:44.536    10:02:39 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:44.536    10:02:39 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:44.536     10:02:39 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:05:44.536     10:02:39 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:05:44.536     10:02:39 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:44.536     10:02:39 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:05:44.536    10:02:39 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:05:44.536     10:02:39 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:05:44.536     10:02:39 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:05:44.536     10:02:39 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:44.536     10:02:39 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:05:44.536    10:02:39 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:05:44.536    10:02:39 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:44.536    10:02:39 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:44.536    10:02:39 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:05:44.536    10:02:39 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:44.536    10:02:39 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:44.536  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:44.536  		--rc genhtml_branch_coverage=1
00:05:44.536  		--rc genhtml_function_coverage=1
00:05:44.536  		--rc genhtml_legend=1
00:05:44.536  		--rc geninfo_all_blocks=1
00:05:44.536  		--rc geninfo_unexecuted_blocks=1
00:05:44.536  		
00:05:44.536  		'
00:05:44.536    10:02:39 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:44.536  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:44.536  		--rc genhtml_branch_coverage=1
00:05:44.536  		--rc genhtml_function_coverage=1
00:05:44.536  		--rc genhtml_legend=1
00:05:44.536  		--rc geninfo_all_blocks=1
00:05:44.536  		--rc geninfo_unexecuted_blocks=1
00:05:44.536  		
00:05:44.536  		'
00:05:44.536    10:02:39 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:44.536  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:44.536  		--rc genhtml_branch_coverage=1
00:05:44.536  		--rc genhtml_function_coverage=1
00:05:44.536  		--rc genhtml_legend=1
00:05:44.536  		--rc geninfo_all_blocks=1
00:05:44.536  		--rc geninfo_unexecuted_blocks=1
00:05:44.536  		
00:05:44.536  		'
00:05:44.536    10:02:39 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:05:44.536  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:44.536  		--rc genhtml_branch_coverage=1
00:05:44.536  		--rc genhtml_function_coverage=1
00:05:44.536  		--rc genhtml_legend=1
00:05:44.536  		--rc geninfo_all_blocks=1
00:05:44.536  		--rc geninfo_unexecuted_blocks=1
00:05:44.537  		
00:05:44.537  		'
00:05:44.537   10:02:39 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh
00:05:44.537     10:02:39 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:05:44.537    10:02:39 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:44.537    10:02:39 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:44.537    10:02:39 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:44.537    10:02:39 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:44.537    10:02:39 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:44.537    10:02:39 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:44.537    10:02:39 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:44.537    10:02:39 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:44.537    10:02:39 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:44.537     10:02:39 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:44.537    10:02:39 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92072e00-b2cb-e211-b423-001e67898f4e
00:05:44.537    10:02:39 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=92072e00-b2cb-e211-b423-001e67898f4e
00:05:44.537    10:02:39 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:44.537    10:02:39 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:44.537    10:02:39 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:44.537    10:02:39 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:44.537    10:02:39 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/common.sh
00:05:44.537     10:02:39 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:05:44.537     10:02:39 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:44.537     10:02:39 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:44.537     10:02:39 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:44.537      10:02:39 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:44.537      10:02:39 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:44.537      10:02:39 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:44.537      10:02:39 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:05:44.537      10:02:39 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:44.537    10:02:39 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:05:44.537    10:02:39 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:44.537    10:02:39 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:44.537    10:02:39 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:44.537    10:02:39 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:44.537    10:02:39 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:44.537    10:02:39 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:44.537  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:44.537    10:02:39 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:44.537    10:02:39 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:44.537    10:02:39 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:44.537   10:02:39 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/common.sh
00:05:44.537   10:02:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:05:44.537   10:02:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:05:44.537   10:02:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:05:44.537   10:02:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:05:44.537   10:02:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:05:44.537   10:02:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:05:44.537   10:02:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/extra_key.json')
00:05:44.537   10:02:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:05:44.537   10:02:39 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:05:44.537   10:02:39 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:05:44.537  INFO: launching applications...
00:05:44.537   10:02:39 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/extra_key.json
00:05:44.537   10:02:39 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:05:44.537   10:02:39 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:05:44.537   10:02:39 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:44.537   10:02:39 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:44.537   10:02:39 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:05:44.537   10:02:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:44.537   10:02:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:44.537   10:02:39 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1739304
00:05:44.537   10:02:39 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/extra_key.json
00:05:44.537   10:02:39 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:05:44.537  Waiting for target to run...
00:05:44.537   10:02:39 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1739304 /var/tmp/spdk_tgt.sock
00:05:44.537   10:02:39 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1739304 ']'
00:05:44.537   10:02:39 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:44.537   10:02:39 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:44.537   10:02:39 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:05:44.537  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:44.537   10:02:39 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:44.537   10:02:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:05:44.537  [2024-11-20 10:02:39.546612] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:05:44.537  [2024-11-20 10:02:39.546763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1739304 ]
00:05:45.105  [2024-11-20 10:02:39.977927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:45.105  [2024-11-20 10:02:40.083623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:45.714   10:02:40 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:45.714   10:02:40 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:05:45.714   10:02:40 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:05:45.714  
00:05:45.714   10:02:40 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:05:45.714  INFO: shutting down applications...
00:05:45.714   10:02:40 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:05:45.714   10:02:40 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:05:45.714   10:02:40 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:45.714   10:02:40 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1739304 ]]
00:05:45.714   10:02:40 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1739304
00:05:45.714   10:02:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:45.714   10:02:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:45.714   10:02:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1739304
00:05:45.714   10:02:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:46.281   10:02:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:46.281   10:02:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:46.281   10:02:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1739304
00:05:46.281   10:02:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:46.847   10:02:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:46.847   10:02:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:46.847   10:02:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1739304
00:05:46.847   10:02:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:47.413   10:02:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:47.413   10:02:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:47.413   10:02:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1739304
00:05:47.413   10:02:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:47.671   10:02:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:47.671   10:02:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:47.671   10:02:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1739304
00:05:47.671   10:02:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:48.237   10:02:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:48.237   10:02:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:48.237   10:02:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1739304
00:05:48.237   10:02:43 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:05:48.237   10:02:43 json_config_extra_key -- json_config/common.sh@43 -- # break
00:05:48.237   10:02:43 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:05:48.237   10:02:43 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:05:48.237  SPDK target shutdown done
00:05:48.237   10:02:43 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:05:48.237  Success
00:05:48.237  
00:05:48.237  real	0m3.996s
00:05:48.237  user	0m3.681s
00:05:48.237  sys	0m0.664s
00:05:48.237   10:02:43 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:48.237   10:02:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:05:48.237  ************************************
00:05:48.237  END TEST json_config_extra_key
00:05:48.237  ************************************
00:05:48.237   10:02:43  -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:05:48.237   10:02:43  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:48.237   10:02:43  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:48.237   10:02:43  -- common/autotest_common.sh@10 -- # set +x
00:05:48.237  ************************************
00:05:48.237  START TEST alias_rpc
00:05:48.237  ************************************
00:05:48.237   10:02:43 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:05:48.495  * Looking for test storage...
00:05:48.495  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/alias_rpc
00:05:48.495    10:02:43 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:48.495     10:02:43 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:05:48.495     10:02:43 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:48.495    10:02:43 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:48.495    10:02:43 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:48.495    10:02:43 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:48.495    10:02:43 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:48.495    10:02:43 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:48.495    10:02:43 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:48.495    10:02:43 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:48.495    10:02:43 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:48.495    10:02:43 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:48.495    10:02:43 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:48.495    10:02:43 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:48.495    10:02:43 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:48.495    10:02:43 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:48.495    10:02:43 alias_rpc -- scripts/common.sh@345 -- # : 1
00:05:48.495    10:02:43 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:48.495    10:02:43 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:48.495     10:02:43 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:05:48.495     10:02:43 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:05:48.495     10:02:43 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:48.495     10:02:43 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:05:48.495    10:02:43 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:48.495     10:02:43 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:05:48.495     10:02:43 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:05:48.495     10:02:43 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:48.495     10:02:43 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:05:48.496    10:02:43 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:48.496    10:02:43 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:48.496    10:02:43 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:48.496    10:02:43 alias_rpc -- scripts/common.sh@368 -- # return 0
00:05:48.496    10:02:43 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:48.496    10:02:43 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:48.496  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:48.496  		--rc genhtml_branch_coverage=1
00:05:48.496  		--rc genhtml_function_coverage=1
00:05:48.496  		--rc genhtml_legend=1
00:05:48.496  		--rc geninfo_all_blocks=1
00:05:48.496  		--rc geninfo_unexecuted_blocks=1
00:05:48.496  		
00:05:48.496  		'
00:05:48.496    10:02:43 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:48.496  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:48.496  		--rc genhtml_branch_coverage=1
00:05:48.496  		--rc genhtml_function_coverage=1
00:05:48.496  		--rc genhtml_legend=1
00:05:48.496  		--rc geninfo_all_blocks=1
00:05:48.496  		--rc geninfo_unexecuted_blocks=1
00:05:48.496  		
00:05:48.496  		'
00:05:48.496    10:02:43 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:48.496  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:48.496  		--rc genhtml_branch_coverage=1
00:05:48.496  		--rc genhtml_function_coverage=1
00:05:48.496  		--rc genhtml_legend=1
00:05:48.496  		--rc geninfo_all_blocks=1
00:05:48.496  		--rc geninfo_unexecuted_blocks=1
00:05:48.496  		
00:05:48.496  		'
00:05:48.496    10:02:43 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:05:48.496  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:48.496  		--rc genhtml_branch_coverage=1
00:05:48.496  		--rc genhtml_function_coverage=1
00:05:48.496  		--rc genhtml_legend=1
00:05:48.496  		--rc geninfo_all_blocks=1
00:05:48.496  		--rc geninfo_unexecuted_blocks=1
00:05:48.496  		
00:05:48.496  		'
00:05:48.496   10:02:43 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:05:48.496   10:02:43 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1739774
00:05:48.496   10:02:43 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:05:48.496   10:02:43 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1739774
00:05:48.496   10:02:43 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1739774 ']'
00:05:48.496   10:02:43 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:48.496   10:02:43 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:48.496   10:02:43 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:48.496  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:48.496   10:02:43 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:48.496   10:02:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:48.496  [2024-11-20 10:02:43.588422] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:05:48.496  [2024-11-20 10:02:43.588574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1739774 ]
00:05:48.755  [2024-11-20 10:02:43.719543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:48.755  [2024-11-20 10:02:43.832064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:49.690   10:02:44 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:49.690   10:02:44 alias_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:49.690   10:02:44 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py load_config -i
00:05:49.948   10:02:44 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1739774
00:05:49.948   10:02:44 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1739774 ']'
00:05:49.948   10:02:44 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1739774
00:05:49.948    10:02:44 alias_rpc -- common/autotest_common.sh@959 -- # uname
00:05:49.948   10:02:44 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:49.948    10:02:44 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1739774
00:05:49.948   10:02:44 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:49.948   10:02:44 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:49.948   10:02:44 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1739774'
00:05:49.948  killing process with pid 1739774
00:05:49.948   10:02:44 alias_rpc -- common/autotest_common.sh@973 -- # kill 1739774
00:05:49.948   10:02:44 alias_rpc -- common/autotest_common.sh@978 -- # wait 1739774
00:05:52.480  
00:05:52.480  real	0m3.667s
00:05:52.480  user	0m3.817s
00:05:52.480  sys	0m0.646s
00:05:52.480   10:02:46 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:52.480   10:02:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:52.480  ************************************
00:05:52.480  END TEST alias_rpc
00:05:52.480  ************************************
00:05:52.480   10:02:47  -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:05:52.480   10:02:47  -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli/tcp.sh
00:05:52.480   10:02:47  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:52.480   10:02:47  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:52.480   10:02:47  -- common/autotest_common.sh@10 -- # set +x
00:05:52.480  ************************************
00:05:52.480  START TEST spdkcli_tcp
00:05:52.480  ************************************
00:05:52.480   10:02:47 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli/tcp.sh
00:05:52.480  * Looking for test storage...
00:05:52.480  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli
00:05:52.480    10:02:47 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:52.480     10:02:47 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version
00:05:52.480     10:02:47 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:52.480    10:02:47 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:52.480    10:02:47 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:52.480    10:02:47 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:52.480    10:02:47 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:52.480    10:02:47 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:05:52.480    10:02:47 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:05:52.480    10:02:47 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:05:52.480    10:02:47 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:05:52.480    10:02:47 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:05:52.480    10:02:47 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:05:52.480    10:02:47 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:05:52.480    10:02:47 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:52.480    10:02:47 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in
00:05:52.480    10:02:47 spdkcli_tcp -- scripts/common.sh@345 -- # : 1
00:05:52.480    10:02:47 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:52.480    10:02:47 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:52.480     10:02:47 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1
00:05:52.480     10:02:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1
00:05:52.480     10:02:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:52.480     10:02:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1
00:05:52.480    10:02:47 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:05:52.480     10:02:47 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2
00:05:52.480     10:02:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2
00:05:52.480     10:02:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:52.480     10:02:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2
00:05:52.480    10:02:47 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:05:52.480    10:02:47 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:52.480    10:02:47 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:52.480    10:02:47 spdkcli_tcp -- scripts/common.sh@368 -- # return 0
00:05:52.480    10:02:47 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:52.480    10:02:47 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:52.480  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:52.480  		--rc genhtml_branch_coverage=1
00:05:52.480  		--rc genhtml_function_coverage=1
00:05:52.480  		--rc genhtml_legend=1
00:05:52.480  		--rc geninfo_all_blocks=1
00:05:52.480  		--rc geninfo_unexecuted_blocks=1
00:05:52.480  		
00:05:52.480  		'
00:05:52.480    10:02:47 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:52.480  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:52.480  		--rc genhtml_branch_coverage=1
00:05:52.480  		--rc genhtml_function_coverage=1
00:05:52.480  		--rc genhtml_legend=1
00:05:52.480  		--rc geninfo_all_blocks=1
00:05:52.480  		--rc geninfo_unexecuted_blocks=1
00:05:52.480  		
00:05:52.480  		'
00:05:52.480    10:02:47 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:52.480  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:52.480  		--rc genhtml_branch_coverage=1
00:05:52.480  		--rc genhtml_function_coverage=1
00:05:52.480  		--rc genhtml_legend=1
00:05:52.480  		--rc geninfo_all_blocks=1
00:05:52.480  		--rc geninfo_unexecuted_blocks=1
00:05:52.480  		
00:05:52.480  		'
00:05:52.480    10:02:47 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:05:52.480  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:52.480  		--rc genhtml_branch_coverage=1
00:05:52.480  		--rc genhtml_function_coverage=1
00:05:52.480  		--rc genhtml_legend=1
00:05:52.480  		--rc geninfo_all_blocks=1
00:05:52.480  		--rc geninfo_unexecuted_blocks=1
00:05:52.480  		
00:05:52.480  		'
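The xtrace above steps through `scripts/common.sh`'s `lt 1.15 2` check: each version string is split on `.-:` into arrays, the shorter one is padded, and fields are compared numerically left to right. A minimal Python sketch of that comparison (function names mirror the shell helpers `cmp_versions`/`lt` but this re-implementation is illustrative, not the actual script):

```python
import re

def _parts(ver):
    # Split on '.', '-' or ':', matching the shell's IFS=.-: read -ra step.
    return [int(p) for p in re.split(r"[.\-:]", ver) if p.isdigit()]

def cmp_versions(ver1, op, ver2):
    a, b = _parts(ver1), _parts(ver2)
    # Pad the shorter list with zeros so e.g. "2" compares as "2.0",
    # like the loop over max(ver1_l, ver2_l) in the trace.
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    if op == "<":
        return a < b
    if op == ">":
        return a > b
    return a == b

def lt(v1, v2):
    return cmp_versions(v1, "<", v2)

print(lt("1.15", "2"))  # the exact check from the trace -> True
```

Here the first fields already decide the result (1 < 2), which is why the traced loop returns after a single iteration.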
00:05:52.480   10:02:47 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli/common.sh
00:05:52.480    10:02:47 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:05:52.480    10:02:47 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/clear_config.py
00:05:52.480   10:02:47 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:05:52.480   10:02:47 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:05:52.480   10:02:47 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:05:52.480   10:02:47 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:05:52.480   10:02:47 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:52.480   10:02:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:52.480   10:02:47 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1740255
00:05:52.481   10:02:47 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:05:52.481   10:02:47 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1740255
00:05:52.481   10:02:47 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1740255 ']'
00:05:52.481   10:02:47 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:52.481   10:02:47 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:52.481   10:02:47 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:52.481  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:52.481   10:02:47 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:52.481   10:02:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:52.481  [2024-11-20 10:02:47.314982] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:05:52.481  [2024-11-20 10:02:47.315123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1740255 ]
00:05:52.481  [2024-11-20 10:02:47.449812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:52.481  [2024-11-20 10:02:47.565874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:52.481  [2024-11-20 10:02:47.565877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:53.417   10:02:48 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:53.417   10:02:48 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0
00:05:53.417   10:02:48 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1740463
00:05:53.417   10:02:48 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:05:53.417   10:02:48 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:05:53.675  [
00:05:53.675    "bdev_malloc_delete",
00:05:53.675    "bdev_malloc_create",
00:05:53.675    "bdev_null_resize",
00:05:53.675    "bdev_null_delete",
00:05:53.675    "bdev_null_create",
00:05:53.675    "bdev_nvme_cuse_unregister",
00:05:53.675    "bdev_nvme_cuse_register",
00:05:53.675    "bdev_opal_new_user",
00:05:53.675    "bdev_opal_set_lock_state",
00:05:53.675    "bdev_opal_delete",
00:05:53.675    "bdev_opal_get_info",
00:05:53.675    "bdev_opal_create",
00:05:53.675    "bdev_nvme_opal_revert",
00:05:53.675    "bdev_nvme_opal_init",
00:05:53.675    "bdev_nvme_send_cmd",
00:05:53.675    "bdev_nvme_set_keys",
00:05:53.675    "bdev_nvme_get_path_iostat",
00:05:53.675    "bdev_nvme_get_mdns_discovery_info",
00:05:53.675    "bdev_nvme_stop_mdns_discovery",
00:05:53.675    "bdev_nvme_start_mdns_discovery",
00:05:53.675    "bdev_nvme_set_multipath_policy",
00:05:53.676    "bdev_nvme_set_preferred_path",
00:05:53.676    "bdev_nvme_get_io_paths",
00:05:53.676    "bdev_nvme_remove_error_injection",
00:05:53.676    "bdev_nvme_add_error_injection",
00:05:53.676    "bdev_nvme_get_discovery_info",
00:05:53.676    "bdev_nvme_stop_discovery",
00:05:53.676    "bdev_nvme_start_discovery",
00:05:53.676    "bdev_nvme_get_controller_health_info",
00:05:53.676    "bdev_nvme_disable_controller",
00:05:53.676    "bdev_nvme_enable_controller",
00:05:53.676    "bdev_nvme_reset_controller",
00:05:53.676    "bdev_nvme_get_transport_statistics",
00:05:53.676    "bdev_nvme_apply_firmware",
00:05:53.676    "bdev_nvme_detach_controller",
00:05:53.676    "bdev_nvme_get_controllers",
00:05:53.676    "bdev_nvme_attach_controller",
00:05:53.676    "bdev_nvme_set_hotplug",
00:05:53.676    "bdev_nvme_set_options",
00:05:53.676    "bdev_passthru_delete",
00:05:53.676    "bdev_passthru_create",
00:05:53.676    "bdev_lvol_set_parent_bdev",
00:05:53.676    "bdev_lvol_set_parent",
00:05:53.676    "bdev_lvol_check_shallow_copy",
00:05:53.676    "bdev_lvol_start_shallow_copy",
00:05:53.676    "bdev_lvol_grow_lvstore",
00:05:53.676    "bdev_lvol_get_lvols",
00:05:53.676    "bdev_lvol_get_lvstores",
00:05:53.676    "bdev_lvol_delete",
00:05:53.676    "bdev_lvol_set_read_only",
00:05:53.676    "bdev_lvol_resize",
00:05:53.676    "bdev_lvol_decouple_parent",
00:05:53.676    "bdev_lvol_inflate",
00:05:53.676    "bdev_lvol_rename",
00:05:53.676    "bdev_lvol_clone_bdev",
00:05:53.676    "bdev_lvol_clone",
00:05:53.676    "bdev_lvol_snapshot",
00:05:53.676    "bdev_lvol_create",
00:05:53.676    "bdev_lvol_delete_lvstore",
00:05:53.676    "bdev_lvol_rename_lvstore",
00:05:53.676    "bdev_lvol_create_lvstore",
00:05:53.676    "bdev_raid_set_options",
00:05:53.676    "bdev_raid_remove_base_bdev",
00:05:53.676    "bdev_raid_add_base_bdev",
00:05:53.676    "bdev_raid_delete",
00:05:53.676    "bdev_raid_create",
00:05:53.676    "bdev_raid_get_bdevs",
00:05:53.676    "bdev_error_inject_error",
00:05:53.676    "bdev_error_delete",
00:05:53.676    "bdev_error_create",
00:05:53.676    "bdev_split_delete",
00:05:53.676    "bdev_split_create",
00:05:53.676    "bdev_delay_delete",
00:05:53.676    "bdev_delay_create",
00:05:53.676    "bdev_delay_update_latency",
00:05:53.676    "bdev_zone_block_delete",
00:05:53.676    "bdev_zone_block_create",
00:05:53.676    "blobfs_create",
00:05:53.676    "blobfs_detect",
00:05:53.676    "blobfs_set_cache_size",
00:05:53.676    "bdev_crypto_delete",
00:05:53.676    "bdev_crypto_create",
00:05:53.676    "bdev_aio_delete",
00:05:53.676    "bdev_aio_rescan",
00:05:53.676    "bdev_aio_create",
00:05:53.676    "bdev_ftl_set_property",
00:05:53.676    "bdev_ftl_get_properties",
00:05:53.676    "bdev_ftl_get_stats",
00:05:53.676    "bdev_ftl_unmap",
00:05:53.676    "bdev_ftl_unload",
00:05:53.676    "bdev_ftl_delete",
00:05:53.676    "bdev_ftl_load",
00:05:53.676    "bdev_ftl_create",
00:05:53.676    "bdev_virtio_attach_controller",
00:05:53.676    "bdev_virtio_scsi_get_devices",
00:05:53.676    "bdev_virtio_detach_controller",
00:05:53.676    "bdev_virtio_blk_set_hotplug",
00:05:53.676    "bdev_iscsi_delete",
00:05:53.676    "bdev_iscsi_create",
00:05:53.676    "bdev_iscsi_set_options",
00:05:53.676    "accel_error_inject_error",
00:05:53.676    "ioat_scan_accel_module",
00:05:53.676    "dsa_scan_accel_module",
00:05:53.676    "iaa_scan_accel_module",
00:05:53.676    "dpdk_cryptodev_get_driver",
00:05:53.676    "dpdk_cryptodev_set_driver",
00:05:53.676    "dpdk_cryptodev_scan_accel_module",
00:05:53.676    "vfu_virtio_create_fs_endpoint",
00:05:53.676    "vfu_virtio_create_scsi_endpoint",
00:05:53.676    "vfu_virtio_scsi_remove_target",
00:05:53.676    "vfu_virtio_scsi_add_target",
00:05:53.676    "vfu_virtio_create_blk_endpoint",
00:05:53.676    "vfu_virtio_delete_endpoint",
00:05:53.676    "keyring_file_remove_key",
00:05:53.676    "keyring_file_add_key",
00:05:53.676    "keyring_linux_set_options",
00:05:53.676    "fsdev_aio_delete",
00:05:53.676    "fsdev_aio_create",
00:05:53.676    "iscsi_get_histogram",
00:05:53.676    "iscsi_enable_histogram",
00:05:53.676    "iscsi_set_options",
00:05:53.676    "iscsi_get_auth_groups",
00:05:53.676    "iscsi_auth_group_remove_secret",
00:05:53.676    "iscsi_auth_group_add_secret",
00:05:53.676    "iscsi_delete_auth_group",
00:05:53.676    "iscsi_create_auth_group",
00:05:53.676    "iscsi_set_discovery_auth",
00:05:53.676    "iscsi_get_options",
00:05:53.676    "iscsi_target_node_request_logout",
00:05:53.676    "iscsi_target_node_set_redirect",
00:05:53.676    "iscsi_target_node_set_auth",
00:05:53.676    "iscsi_target_node_add_lun",
00:05:53.676    "iscsi_get_stats",
00:05:53.676    "iscsi_get_connections",
00:05:53.676    "iscsi_portal_group_set_auth",
00:05:53.676    "iscsi_start_portal_group",
00:05:53.676    "iscsi_delete_portal_group",
00:05:53.676    "iscsi_create_portal_group",
00:05:53.676    "iscsi_get_portal_groups",
00:05:53.676    "iscsi_delete_target_node",
00:05:53.676    "iscsi_target_node_remove_pg_ig_maps",
00:05:53.676    "iscsi_target_node_add_pg_ig_maps",
00:05:53.676    "iscsi_create_target_node",
00:05:53.676    "iscsi_get_target_nodes",
00:05:53.676    "iscsi_delete_initiator_group",
00:05:53.676    "iscsi_initiator_group_remove_initiators",
00:05:53.676    "iscsi_initiator_group_add_initiators",
00:05:53.676    "iscsi_create_initiator_group",
00:05:53.676    "iscsi_get_initiator_groups",
00:05:53.676    "nvmf_set_crdt",
00:05:53.676    "nvmf_set_config",
00:05:53.676    "nvmf_set_max_subsystems",
00:05:53.676    "nvmf_stop_mdns_prr",
00:05:53.676    "nvmf_publish_mdns_prr",
00:05:53.676    "nvmf_subsystem_get_listeners",
00:05:53.676    "nvmf_subsystem_get_qpairs",
00:05:53.676    "nvmf_subsystem_get_controllers",
00:05:53.676    "nvmf_get_stats",
00:05:53.676    "nvmf_get_transports",
00:05:53.676    "nvmf_create_transport",
00:05:53.676    "nvmf_get_targets",
00:05:53.676    "nvmf_delete_target",
00:05:53.676    "nvmf_create_target",
00:05:53.676    "nvmf_subsystem_allow_any_host",
00:05:53.676    "nvmf_subsystem_set_keys",
00:05:53.676    "nvmf_subsystem_remove_host",
00:05:53.676    "nvmf_subsystem_add_host",
00:05:53.676    "nvmf_ns_remove_host",
00:05:53.676    "nvmf_ns_add_host",
00:05:53.676    "nvmf_subsystem_remove_ns",
00:05:53.676    "nvmf_subsystem_set_ns_ana_group",
00:05:53.676    "nvmf_subsystem_add_ns",
00:05:53.676    "nvmf_subsystem_listener_set_ana_state",
00:05:53.676    "nvmf_discovery_get_referrals",
00:05:53.676    "nvmf_discovery_remove_referral",
00:05:53.676    "nvmf_discovery_add_referral",
00:05:53.676    "nvmf_subsystem_remove_listener",
00:05:53.676    "nvmf_subsystem_add_listener",
00:05:53.676    "nvmf_delete_subsystem",
00:05:53.676    "nvmf_create_subsystem",
00:05:53.676    "nvmf_get_subsystems",
00:05:53.676    "env_dpdk_get_mem_stats",
00:05:53.676    "nbd_get_disks",
00:05:53.676    "nbd_stop_disk",
00:05:53.676    "nbd_start_disk",
00:05:53.676    "ublk_recover_disk",
00:05:53.676    "ublk_get_disks",
00:05:53.676    "ublk_stop_disk",
00:05:53.676    "ublk_start_disk",
00:05:53.676    "ublk_destroy_target",
00:05:53.676    "ublk_create_target",
00:05:53.676    "virtio_blk_create_transport",
00:05:53.676    "virtio_blk_get_transports",
00:05:53.676    "vhost_controller_set_coalescing",
00:05:53.676    "vhost_get_controllers",
00:05:53.676    "vhost_delete_controller",
00:05:53.676    "vhost_create_blk_controller",
00:05:53.676    "vhost_scsi_controller_remove_target",
00:05:53.676    "vhost_scsi_controller_add_target",
00:05:53.676    "vhost_start_scsi_controller",
00:05:53.676    "vhost_create_scsi_controller",
00:05:53.676    "thread_set_cpumask",
00:05:53.676    "scheduler_set_options",
00:05:53.676    "framework_get_governor",
00:05:53.676    "framework_get_scheduler",
00:05:53.676    "framework_set_scheduler",
00:05:53.676    "framework_get_reactors",
00:05:53.676    "thread_get_io_channels",
00:05:53.676    "thread_get_pollers",
00:05:53.676    "thread_get_stats",
00:05:53.676    "framework_monitor_context_switch",
00:05:53.676    "spdk_kill_instance",
00:05:53.676    "log_enable_timestamps",
00:05:53.677    "log_get_flags",
00:05:53.677    "log_clear_flag",
00:05:53.677    "log_set_flag",
00:05:53.677    "log_get_level",
00:05:53.677    "log_set_level",
00:05:53.677    "log_get_print_level",
00:05:53.677    "log_set_print_level",
00:05:53.677    "framework_enable_cpumask_locks",
00:05:53.677    "framework_disable_cpumask_locks",
00:05:53.677    "framework_wait_init",
00:05:53.677    "framework_start_init",
00:05:53.677    "scsi_get_devices",
00:05:53.677    "bdev_get_histogram",
00:05:53.677    "bdev_enable_histogram",
00:05:53.677    "bdev_set_qos_limit",
00:05:53.677    "bdev_set_qd_sampling_period",
00:05:53.677    "bdev_get_bdevs",
00:05:53.677    "bdev_reset_iostat",
00:05:53.677    "bdev_get_iostat",
00:05:53.677    "bdev_examine",
00:05:53.677    "bdev_wait_for_examine",
00:05:53.677    "bdev_set_options",
00:05:53.677    "accel_get_stats",
00:05:53.677    "accel_set_options",
00:05:53.677    "accel_set_driver",
00:05:53.677    "accel_crypto_key_destroy",
00:05:53.677    "accel_crypto_keys_get",
00:05:53.677    "accel_crypto_key_create",
00:05:53.677    "accel_assign_opc",
00:05:53.677    "accel_get_module_info",
00:05:53.677    "accel_get_opc_assignments",
00:05:53.677    "vmd_rescan",
00:05:53.677    "vmd_remove_device",
00:05:53.677    "vmd_enable",
00:05:53.677    "sock_get_default_impl",
00:05:53.677    "sock_set_default_impl",
00:05:53.677    "sock_impl_set_options",
00:05:53.677    "sock_impl_get_options",
00:05:53.677    "iobuf_get_stats",
00:05:53.677    "iobuf_set_options",
00:05:53.677    "keyring_get_keys",
00:05:53.677    "vfu_tgt_set_base_path",
00:05:53.677    "framework_get_pci_devices",
00:05:53.677    "framework_get_config",
00:05:53.677    "framework_get_subsystems",
00:05:53.677    "fsdev_set_opts",
00:05:53.677    "fsdev_get_opts",
00:05:53.677    "trace_get_info",
00:05:53.677    "trace_get_tpoint_group_mask",
00:05:53.677    "trace_disable_tpoint_group",
00:05:53.677    "trace_enable_tpoint_group",
00:05:53.677    "trace_clear_tpoint_mask",
00:05:53.677    "trace_set_tpoint_mask",
00:05:53.677    "notify_get_notifications",
00:05:53.677    "notify_get_types",
00:05:53.677    "spdk_get_version",
00:05:53.677    "rpc_get_methods"
00:05:53.677  ]
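The method list above was fetched over TCP because `socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock` bridges the TCP port to spdk_tgt's UNIX-domain RPC socket. A self-contained sketch of that bridging pattern, with a local echo server standing in for spdk_tgt (the socket path, the request body, and the one-shot forwarding are assumptions for illustration; real SPDK RPC framing is handled by rpc.py):

```python
import os
import socket
import tempfile
import threading

def unix_echo_server(path, ready):
    # Stand-in for spdk_tgt's RPC listener: echo one request back verbatim.
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    conn.sendall(conn.recv(4096))
    conn.close()
    srv.close()

def tcp_to_unix_bridge(tcp_srv, unix_path):
    # One-shot socat-style relay: TCP request -> UNIX socket -> TCP response.
    conn, _ = tcp_srv.accept()
    ux = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    ux.connect(unix_path)
    ux.sendall(conn.recv(4096))
    conn.sendall(ux.recv(4096))
    ux.close()
    conn.close()

path = os.path.join(tempfile.mkdtemp(), "spdk.sock")
ready = threading.Event()
threading.Thread(target=unix_echo_server, args=(path, ready), daemon=True).start()
ready.wait()

tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_srv.bind(("127.0.0.1", 0))  # ephemeral port instead of the log's 9998
tcp_srv.listen(1)
port = tcp_srv.getsockname()[1]
threading.Thread(target=tcp_to_unix_bridge, args=(tcp_srv, path), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
request = b'{"jsonrpc":"2.0","method":"rpc_get_methods","id":1}'
client.sendall(request)
reply = client.recv(4096)
client.close()
```

In the test itself, `rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998` plays the client role, retrying until the socat relay is up.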
00:05:53.677   10:02:48 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:05:53.677   10:02:48 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:53.677   10:02:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:53.677   10:02:48 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:05:53.677   10:02:48 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1740255
00:05:53.677   10:02:48 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1740255 ']'
00:05:53.677   10:02:48 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1740255
00:05:53.677    10:02:48 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname
00:05:53.677   10:02:48 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:53.677    10:02:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1740255
00:05:53.677   10:02:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:53.677   10:02:48 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:53.677   10:02:48 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1740255'
00:05:53.677  killing process with pid 1740255
00:05:53.677   10:02:48 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1740255
00:05:53.677   10:02:48 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1740255
00:05:56.207  
00:05:56.207  real	0m3.788s
00:05:56.207  user	0m6.955s
00:05:56.207  sys	0m0.658s
00:05:56.207   10:02:50 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:56.207   10:02:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:56.207  ************************************
00:05:56.207  END TEST spdkcli_tcp
00:05:56.207  ************************************
00:05:56.207   10:02:50  -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:05:56.207   10:02:50  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:56.207   10:02:50  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:56.207   10:02:50  -- common/autotest_common.sh@10 -- # set +x
00:05:56.207  ************************************
00:05:56.207  START TEST dpdk_mem_utility
00:05:56.207  ************************************
00:05:56.207   10:02:50 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:05:56.207  * Looking for test storage...
00:05:56.207  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/dpdk_memory_utility
00:05:56.207    10:02:50 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:56.207     10:02:50 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version
00:05:56.207     10:02:50 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:56.207    10:02:51 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:56.207    10:02:51 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:56.207    10:02:51 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:56.207    10:02:51 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:56.207    10:02:51 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-:
00:05:56.207    10:02:51 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1
00:05:56.207    10:02:51 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-:
00:05:56.207    10:02:51 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2
00:05:56.207    10:02:51 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<'
00:05:56.207    10:02:51 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2
00:05:56.207    10:02:51 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1
00:05:56.207    10:02:51 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:56.207    10:02:51 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in
00:05:56.207    10:02:51 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1
00:05:56.207    10:02:51 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:56.207    10:02:51 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:56.207     10:02:51 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1
00:05:56.207     10:02:51 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1
00:05:56.207     10:02:51 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:56.207     10:02:51 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1
00:05:56.207    10:02:51 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1
00:05:56.207     10:02:51 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2
00:05:56.207     10:02:51 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2
00:05:56.207     10:02:51 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:56.207     10:02:51 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2
00:05:56.207    10:02:51 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2
00:05:56.207    10:02:51 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:56.207    10:02:51 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:56.207    10:02:51 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0
00:05:56.207    10:02:51 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:56.207    10:02:51 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:56.207  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:56.207  		--rc genhtml_branch_coverage=1
00:05:56.207  		--rc genhtml_function_coverage=1
00:05:56.207  		--rc genhtml_legend=1
00:05:56.207  		--rc geninfo_all_blocks=1
00:05:56.207  		--rc geninfo_unexecuted_blocks=1
00:05:56.207  		
00:05:56.207  		'
00:05:56.207    10:02:51 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:56.207  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:56.207  		--rc genhtml_branch_coverage=1
00:05:56.207  		--rc genhtml_function_coverage=1
00:05:56.207  		--rc genhtml_legend=1
00:05:56.207  		--rc geninfo_all_blocks=1
00:05:56.207  		--rc geninfo_unexecuted_blocks=1
00:05:56.207  		
00:05:56.207  		'
00:05:56.207    10:02:51 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:56.207  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:56.207  		--rc genhtml_branch_coverage=1
00:05:56.207  		--rc genhtml_function_coverage=1
00:05:56.207  		--rc genhtml_legend=1
00:05:56.207  		--rc geninfo_all_blocks=1
00:05:56.207  		--rc geninfo_unexecuted_blocks=1
00:05:56.207  		
00:05:56.207  		'
00:05:56.207    10:02:51 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:05:56.207  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:56.207  		--rc genhtml_branch_coverage=1
00:05:56.207  		--rc genhtml_function_coverage=1
00:05:56.207  		--rc genhtml_legend=1
00:05:56.208  		--rc geninfo_all_blocks=1
00:05:56.208  		--rc geninfo_unexecuted_blocks=1
00:05:56.208  		
00:05:56.208  		'
00:05:56.208   10:02:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:05:56.208   10:02:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1740845
00:05:56.208   10:02:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:05:56.208   10:02:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1740845
00:05:56.208   10:02:51 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1740845 ']'
00:05:56.208   10:02:51 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:56.208   10:02:51 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:56.208   10:02:51 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:56.208  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:56.208   10:02:51 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:56.208   10:02:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:56.208  [2024-11-20 10:02:51.134103] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:05:56.208  [2024-11-20 10:02:51.134253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1740845 ]
00:05:56.208  [2024-11-20 10:02:51.268296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:56.467  [2024-11-20 10:02:51.387726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:57.402   10:02:52 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:57.402   10:02:52 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:05:57.402   10:02:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:05:57.402   10:02:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:05:57.402   10:02:52 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:57.402   10:02:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:57.402  {
00:05:57.402  "filename": "/tmp/spdk_mem_dump.txt"
00:05:57.402  }
00:05:57.402   10:02:52 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:57.402   10:02:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:05:57.402  DPDK memory size 816.000000 MiB in 1 heap(s)
00:05:57.402  1 heaps totaling size 816.000000 MiB
00:05:57.402    size:  816.000000 MiB heap id: 0
00:05:57.402  end heaps----------
00:05:57.402  9 mempools totaling size 595.772034 MiB
00:05:57.402    size:  212.674988 MiB name: PDU_immediate_data_Pool
00:05:57.402    size:  158.602051 MiB name: PDU_data_out_Pool
00:05:57.402    size:   92.545471 MiB name: bdev_io_1740845
00:05:57.402    size:   50.003479 MiB name: msgpool_1740845
00:05:57.402    size:   36.509338 MiB name: fsdev_io_1740845
00:05:57.402    size:   21.763794 MiB name: PDU_Pool
00:05:57.402    size:   19.513306 MiB name: SCSI_TASK_Pool
00:05:57.402    size:    4.133484 MiB name: evtpool_1740845
00:05:57.402    size:    0.026123 MiB name: Session_Pool
00:05:57.402  end mempools-------
00:05:57.402  6 memzones totaling size 4.142822 MiB
00:05:57.402    size:    1.000366 MiB name: RG_ring_0_1740845
00:05:57.402    size:    1.000366 MiB name: RG_ring_1_1740845
00:05:57.402    size:    1.000366 MiB name: RG_ring_4_1740845
00:05:57.402    size:    1.000366 MiB name: RG_ring_5_1740845
00:05:57.402    size:    0.125366 MiB name: RG_ring_2_1740845
00:05:57.402    size:    0.015991 MiB name: RG_ring_3_1740845
00:05:57.402  end memzones-------
00:05:57.402   10:02:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:05:57.402  heap id: 0 total size: 816.000000 MiB number of busy elements: 44 number of free elements: 19
00:05:57.402    list of free elements. size: 16.857605 MiB
00:05:57.402      element at address: 0x200006400000 with size:    1.995972 MiB
00:05:57.402      element at address: 0x20000a600000 with size:    1.995972 MiB
00:05:57.402      element at address: 0x200003e00000 with size:    1.991028 MiB
00:05:57.402      element at address: 0x200018d00040 with size:    0.999939 MiB
00:05:57.402      element at address: 0x200019100040 with size:    0.999939 MiB
00:05:57.402      element at address: 0x200019200000 with size:    0.999329 MiB
00:05:57.402      element at address: 0x200000400000 with size:    0.998108 MiB
00:05:57.402      element at address: 0x200031e00000 with size:    0.994324 MiB
00:05:57.402      element at address: 0x200018a00000 with size:    0.959900 MiB
00:05:57.402      element at address: 0x200019500040 with size:    0.937256 MiB
00:05:57.402      element at address: 0x200000200000 with size:    0.716980 MiB
00:05:57.402      element at address: 0x20001ac00000 with size:    0.583191 MiB
00:05:57.402      element at address: 0x200000c00000 with size:    0.495300 MiB
00:05:57.402      element at address: 0x200018e00000 with size:    0.491150 MiB
00:05:57.402      element at address: 0x200019600000 with size:    0.485657 MiB
00:05:57.403      element at address: 0x200012c00000 with size:    0.446167 MiB
00:05:57.403      element at address: 0x200028000000 with size:    0.411072 MiB
00:05:57.403      element at address: 0x200000800000 with size:    0.355286 MiB
00:05:57.403      element at address: 0x20000a5ff040 with size:    0.001038 MiB
00:05:57.403    list of standard malloc elements. size: 199.221497 MiB
00:05:57.403      element at address: 0x20000a7fef80 with size:  132.000183 MiB
00:05:57.403      element at address: 0x2000065fef80 with size:   64.000183 MiB
00:05:57.403      element at address: 0x200018bfff80 with size:    1.000183 MiB
00:05:57.403      element at address: 0x200018ffff80 with size:    1.000183 MiB
00:05:57.403      element at address: 0x2000193fff80 with size:    1.000183 MiB
00:05:57.403      element at address: 0x2000003d9e80 with size:    0.140808 MiB
00:05:57.403      element at address: 0x2000195eff40 with size:    0.062683 MiB
00:05:57.403      element at address: 0x2000003fdf40 with size:    0.007996 MiB
00:05:57.403      element at address: 0x200012bff040 with size:    0.000427 MiB
00:05:57.403      element at address: 0x200012bffa00 with size:    0.000366 MiB
00:05:57.403      element at address: 0x2000002d7b00 with size:    0.000244 MiB
00:05:57.403      element at address: 0x2000003d9d80 with size:    0.000244 MiB
00:05:57.403      element at address: 0x2000004ff840 with size:    0.000244 MiB
00:05:57.403      element at address: 0x2000004ff940 with size:    0.000244 MiB
00:05:57.403      element at address: 0x2000004ffa40 with size:    0.000244 MiB
00:05:57.403      element at address: 0x2000004ffcc0 with size:    0.000244 MiB
00:05:57.403      element at address: 0x2000004ffdc0 with size:    0.000244 MiB
00:05:57.403      element at address: 0x20000087f3c0 with size:    0.000244 MiB
00:05:57.403      element at address: 0x20000087f4c0 with size:    0.000244 MiB
00:05:57.403      element at address: 0x2000008ff800 with size:    0.000244 MiB
00:05:57.403      element at address: 0x2000008ffa80 with size:    0.000244 MiB
00:05:57.403      element at address: 0x200000cfef00 with size:    0.000244 MiB
00:05:57.403      element at address: 0x200000cff000 with size:    0.000244 MiB
00:05:57.403      element at address: 0x20000a5ff480 with size:    0.000244 MiB
00:05:57.403      element at address: 0x20000a5ff580 with size:    0.000244 MiB
00:05:57.403      element at address: 0x20000a5ff680 with size:    0.000244 MiB
00:05:57.403      element at address: 0x20000a5ff780 with size:    0.000244 MiB
00:05:57.403      element at address: 0x20000a5ff880 with size:    0.000244 MiB
00:05:57.403      element at address: 0x20000a5ff980 with size:    0.000244 MiB
00:05:57.403      element at address: 0x20000a5ffc00 with size:    0.000244 MiB
00:05:57.403      element at address: 0x20000a5ffd00 with size:    0.000244 MiB
00:05:57.403      element at address: 0x20000a5ffe00 with size:    0.000244 MiB
00:05:57.403      element at address: 0x20000a5fff00 with size:    0.000244 MiB
00:05:57.403      element at address: 0x200012bff200 with size:    0.000244 MiB
00:05:57.403      element at address: 0x200012bff300 with size:    0.000244 MiB
00:05:57.403      element at address: 0x200012bff400 with size:    0.000244 MiB
00:05:57.403      element at address: 0x200012bff500 with size:    0.000244 MiB
00:05:57.403      element at address: 0x200012bff600 with size:    0.000244 MiB
00:05:57.403      element at address: 0x200012bff700 with size:    0.000244 MiB
00:05:57.403      element at address: 0x200012bff800 with size:    0.000244 MiB
00:05:57.403      element at address: 0x200012bff900 with size:    0.000244 MiB
00:05:57.403      element at address: 0x200012bffb80 with size:    0.000244 MiB
00:05:57.403      element at address: 0x200012bffc80 with size:    0.000244 MiB
00:05:57.403      element at address: 0x200012bfff00 with size:    0.000244 MiB
00:05:57.403    list of memzone associated elements. size: 599.920898 MiB
00:05:57.403      element at address: 0x20001ac954c0 with size:  211.416809 MiB
00:05:57.403        associated memzone info: size:  211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:57.403      element at address: 0x20002806ff80 with size:  157.562622 MiB
00:05:57.403        associated memzone info: size:  157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:57.403      element at address: 0x200012df4740 with size:   92.045105 MiB
00:05:57.403        associated memzone info: size:   92.044922 MiB name: MP_bdev_io_1740845_0
00:05:57.403      element at address: 0x200000dff340 with size:   48.003113 MiB
00:05:57.403        associated memzone info: size:   48.002930 MiB name: MP_msgpool_1740845_0
00:05:57.403      element at address: 0x200003ffdb40 with size:   36.008972 MiB
00:05:57.403        associated memzone info: size:   36.008789 MiB name: MP_fsdev_io_1740845_0
00:05:57.403      element at address: 0x2000197be900 with size:   20.255615 MiB
00:05:57.403        associated memzone info: size:   20.255432 MiB name: MP_PDU_Pool_0
00:05:57.403      element at address: 0x200031ffeb00 with size:   18.005127 MiB
00:05:57.403        associated memzone info: size:   18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:57.403      element at address: 0x2000004ffec0 with size:    3.000305 MiB
00:05:57.403        associated memzone info: size:    3.000122 MiB name: MP_evtpool_1740845_0
00:05:57.403      element at address: 0x2000009ffdc0 with size:    2.000549 MiB
00:05:57.403        associated memzone info: size:    2.000366 MiB name: RG_MP_msgpool_1740845
00:05:57.403      element at address: 0x2000002d7c00 with size:    1.008179 MiB
00:05:57.403        associated memzone info: size:    1.007996 MiB name: MP_evtpool_1740845
00:05:57.403      element at address: 0x200018efde00 with size:    1.008179 MiB
00:05:57.403        associated memzone info: size:    1.007996 MiB name: MP_PDU_Pool
00:05:57.403      element at address: 0x2000196bc780 with size:    1.008179 MiB
00:05:57.403        associated memzone info: size:    1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:57.403      element at address: 0x200018afde00 with size:    1.008179 MiB
00:05:57.403        associated memzone info: size:    1.007996 MiB name: MP_PDU_data_out_Pool
00:05:57.403      element at address: 0x200012cf25c0 with size:    1.008179 MiB
00:05:57.403        associated memzone info: size:    1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:57.403      element at address: 0x200000cff100 with size:    1.000549 MiB
00:05:57.403        associated memzone info: size:    1.000366 MiB name: RG_ring_0_1740845
00:05:57.403      element at address: 0x2000008ffb80 with size:    1.000549 MiB
00:05:57.403        associated memzone info: size:    1.000366 MiB name: RG_ring_1_1740845
00:05:57.403      element at address: 0x2000192ffd40 with size:    1.000549 MiB
00:05:57.403        associated memzone info: size:    1.000366 MiB name: RG_ring_4_1740845
00:05:57.403      element at address: 0x200031efe8c0 with size:    1.000549 MiB
00:05:57.403        associated memzone info: size:    1.000366 MiB name: RG_ring_5_1740845
00:05:57.403      element at address: 0x20000087f5c0 with size:    0.500549 MiB
00:05:57.403        associated memzone info: size:    0.500366 MiB name: RG_MP_fsdev_io_1740845
00:05:57.403      element at address: 0x200000c7ecc0 with size:    0.500549 MiB
00:05:57.403        associated memzone info: size:    0.500366 MiB name: RG_MP_bdev_io_1740845
00:05:57.403      element at address: 0x200018e7dbc0 with size:    0.500549 MiB
00:05:57.403        associated memzone info: size:    0.500366 MiB name: RG_MP_PDU_Pool
00:05:57.403      element at address: 0x200012c72380 with size:    0.500549 MiB
00:05:57.403        associated memzone info: size:    0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:57.403      element at address: 0x20001967c540 with size:    0.250549 MiB
00:05:57.403        associated memzone info: size:    0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:57.403      element at address: 0x2000002b78c0 with size:    0.125549 MiB
00:05:57.403        associated memzone info: size:    0.125366 MiB name: RG_MP_evtpool_1740845
00:05:57.403      element at address: 0x20000085f180 with size:    0.125549 MiB
00:05:57.403        associated memzone info: size:    0.125366 MiB name: RG_ring_2_1740845
00:05:57.403      element at address: 0x200018af5bc0 with size:    0.031799 MiB
00:05:57.403        associated memzone info: size:    0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:57.403      element at address: 0x2000280693c0 with size:    0.023804 MiB
00:05:57.403        associated memzone info: size:    0.023621 MiB name: MP_Session_Pool_0
00:05:57.403      element at address: 0x20000085af40 with size:    0.016174 MiB
00:05:57.403        associated memzone info: size:    0.015991 MiB name: RG_ring_3_1740845
00:05:57.403      element at address: 0x20002806f540 with size:    0.002502 MiB
00:05:57.403        associated memzone info: size:    0.002319 MiB name: RG_MP_Session_Pool
00:05:57.403      element at address: 0x2000004ffb40 with size:    0.000366 MiB
00:05:57.403        associated memzone info: size:    0.000183 MiB name: MP_msgpool_1740845
00:05:57.403      element at address: 0x2000008ff900 with size:    0.000366 MiB
00:05:57.403        associated memzone info: size:    0.000183 MiB name: MP_fsdev_io_1740845
00:05:57.403      element at address: 0x200012bffd80 with size:    0.000366 MiB
00:05:57.403        associated memzone info: size:    0.000183 MiB name: MP_bdev_io_1740845
00:05:57.403      element at address: 0x20000a5ffa80 with size:    0.000366 MiB
00:05:57.403        associated memzone info: size:    0.000183 MiB name: MP_Session_Pool
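The `dpdk_mem_info.py -m 0` dump above is line-oriented: each element line ends with its size in MiB followed by the literal `MiB`. A minimal sketch of summing the per-element sizes with `awk` (the sample file below reuses two element lines copied from the dump above; the field positions are an assumption based on the layout shown in this log, not a documented interface of the script):

```shell
# Sample lines in the "element at address ... with size: N MiB" format
# produced by dpdk_mem_info.py, as seen in the dump above.
cat > /tmp/mem_dump_sample.txt <<'EOF'
element at address: 0x200006400000 with size:    1.995972 MiB
element at address: 0x20000a600000 with size:    1.995972 MiB
EOF
# Sum the per-element sizes: $(NF-1) is the MiB value, $NF is "MiB".
awk '/element at address/ {total += $(NF-1)} END {printf "%.6f\n", total}' /tmp/mem_dump_sample.txt
```

For the two sample lines this prints `3.991944`; run against a full dump it gives a quick cross-check of the per-list size totals the script reports.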
00:05:57.403   10:02:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:57.403   10:02:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1740845
00:05:57.403   10:02:52 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1740845 ']'
00:05:57.403   10:02:52 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1740845
00:05:57.403    10:02:52 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:57.403   10:02:52 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:57.403    10:02:52 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1740845
00:05:57.403   10:02:52 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:57.403   10:02:52 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:57.403   10:02:52 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1740845'
00:05:57.403  killing process with pid 1740845
00:05:57.403   10:02:52 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1740845
00:05:57.403   10:02:52 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1740845
00:05:59.305  
00:05:59.305  real	0m3.452s
00:05:59.305  user	0m3.508s
00:05:59.305  sys	0m0.601s
00:05:59.305   10:02:54 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:59.305   10:02:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:59.305  ************************************
00:05:59.305  END TEST dpdk_mem_utility
00:05:59.305  ************************************
00:05:59.305   10:02:54  -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/event.sh
00:05:59.305   10:02:54  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:59.305   10:02:54  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:59.305   10:02:54  -- common/autotest_common.sh@10 -- # set +x
00:05:59.305  ************************************
00:05:59.305  START TEST event
00:05:59.305  ************************************
00:05:59.305   10:02:54 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/event.sh
00:05:59.564  * Looking for test storage...
00:05:59.564  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event
00:05:59.564    10:02:54 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:59.564     10:02:54 event -- common/autotest_common.sh@1693 -- # lcov --version
00:05:59.564     10:02:54 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:59.564    10:02:54 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:59.564    10:02:54 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:59.564    10:02:54 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:59.564    10:02:54 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:59.564    10:02:54 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:59.564    10:02:54 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:59.564    10:02:54 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:59.564    10:02:54 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:59.564    10:02:54 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:59.564    10:02:54 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:59.564    10:02:54 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:59.564    10:02:54 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:59.564    10:02:54 event -- scripts/common.sh@344 -- # case "$op" in
00:05:59.564    10:02:54 event -- scripts/common.sh@345 -- # : 1
00:05:59.564    10:02:54 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:59.564    10:02:54 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:59.564     10:02:54 event -- scripts/common.sh@365 -- # decimal 1
00:05:59.564     10:02:54 event -- scripts/common.sh@353 -- # local d=1
00:05:59.564     10:02:54 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:59.564     10:02:54 event -- scripts/common.sh@355 -- # echo 1
00:05:59.564    10:02:54 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:59.564     10:02:54 event -- scripts/common.sh@366 -- # decimal 2
00:05:59.564     10:02:54 event -- scripts/common.sh@353 -- # local d=2
00:05:59.564     10:02:54 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:59.564     10:02:54 event -- scripts/common.sh@355 -- # echo 2
00:05:59.564    10:02:54 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:59.564    10:02:54 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:59.564    10:02:54 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:59.564    10:02:54 event -- scripts/common.sh@368 -- # return 0
00:05:59.564    10:02:54 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:59.564    10:02:54 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:59.564  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:59.564  		--rc genhtml_branch_coverage=1
00:05:59.564  		--rc genhtml_function_coverage=1
00:05:59.564  		--rc genhtml_legend=1
00:05:59.564  		--rc geninfo_all_blocks=1
00:05:59.564  		--rc geninfo_unexecuted_blocks=1
00:05:59.564  		
00:05:59.564  		'
00:05:59.564    10:02:54 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:59.564  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:59.564  		--rc genhtml_branch_coverage=1
00:05:59.564  		--rc genhtml_function_coverage=1
00:05:59.564  		--rc genhtml_legend=1
00:05:59.564  		--rc geninfo_all_blocks=1
00:05:59.564  		--rc geninfo_unexecuted_blocks=1
00:05:59.564  		
00:05:59.564  		'
00:05:59.564    10:02:54 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:59.564  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:59.564  		--rc genhtml_branch_coverage=1
00:05:59.564  		--rc genhtml_function_coverage=1
00:05:59.564  		--rc genhtml_legend=1
00:05:59.564  		--rc geninfo_all_blocks=1
00:05:59.564  		--rc geninfo_unexecuted_blocks=1
00:05:59.564  		
00:05:59.564  		'
00:05:59.564    10:02:54 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:05:59.564  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:59.564  		--rc genhtml_branch_coverage=1
00:05:59.564  		--rc genhtml_function_coverage=1
00:05:59.564  		--rc genhtml_legend=1
00:05:59.564  		--rc geninfo_all_blocks=1
00:05:59.564  		--rc geninfo_unexecuted_blocks=1
00:05:59.564  		
00:05:59.564  		'
00:05:59.564   10:02:54 event -- event/event.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/bdev/nbd_common.sh
00:05:59.564    10:02:54 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:59.564   10:02:54 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:59.564   10:02:54 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:05:59.564   10:02:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:59.564   10:02:54 event -- common/autotest_common.sh@10 -- # set +x
00:05:59.564  ************************************
00:05:59.564  START TEST event_perf
00:05:59.564  ************************************
00:05:59.564   10:02:54 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:59.564  Running I/O for 1 seconds...[2024-11-20 10:02:54.603718] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:05:59.564  [2024-11-20 10:02:54.603840] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1741311 ]
00:05:59.823  [2024-11-20 10:02:54.737888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:59.823  [2024-11-20 10:02:54.857752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:59.823  [2024-11-20 10:02:54.857792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:59.823  [2024-11-20 10:02:54.857842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:59.823  [2024-11-20 10:02:54.857833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:01.197  Running I/O for 1 seconds...
00:06:01.197  lcore  0:   220854
00:06:01.197  lcore  1:   220852
00:06:01.197  lcore  2:   220852
00:06:01.197  lcore  3:   220853
00:06:01.197  done.
00:06:01.197  
00:06:01.197  real	0m1.513s
00:06:01.197  user	0m4.347s
00:06:01.197  sys	0m0.152s
00:06:01.197   10:02:56 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:01.197   10:02:56 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:06:01.197  ************************************
00:06:01.197  END TEST event_perf
00:06:01.197  ************************************
00:06:01.197   10:02:56 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:01.197   10:02:56 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:06:01.197   10:02:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:01.197   10:02:56 event -- common/autotest_common.sh@10 -- # set +x
00:06:01.197  ************************************
00:06:01.197  START TEST event_reactor
00:06:01.197  ************************************
00:06:01.197   10:02:56 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:01.197  [2024-11-20 10:02:56.169996] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:06:01.197  [2024-11-20 10:02:56.170103] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1741473 ]
00:06:01.197  [2024-11-20 10:02:56.301127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:01.455  [2024-11-20 10:02:56.424071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:02.829  test_start
00:06:02.829  oneshot
00:06:02.829  tick 100
00:06:02.829  tick 100
00:06:02.829  tick 250
00:06:02.829  tick 100
00:06:02.829  tick 100
00:06:02.829  tick 100
00:06:02.829  tick 250
00:06:02.829  tick 500
00:06:02.829  tick 100
00:06:02.829  tick 100
00:06:02.829  tick 250
00:06:02.829  tick 100
00:06:02.829  tick 100
00:06:02.829  test_end
00:06:02.829  
00:06:02.829  real	0m1.513s
00:06:02.829  user	0m1.367s
00:06:02.829  sys	0m0.139s
00:06:02.829   10:02:57 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:02.829   10:02:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:06:02.829  ************************************
00:06:02.829  END TEST event_reactor
00:06:02.829  ************************************
00:06:02.829   10:02:57 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:02.829   10:02:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:06:02.829   10:02:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:02.829   10:02:57 event -- common/autotest_common.sh@10 -- # set +x
00:06:02.829  ************************************
00:06:02.829  START TEST event_reactor_perf
00:06:02.829  ************************************
00:06:02.829   10:02:57 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:02.829  [2024-11-20 10:02:57.734053] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:06:02.829  [2024-11-20 10:02:57.734161] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1741752 ]
00:06:02.829  [2024-11-20 10:02:57.864900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:03.087  [2024-11-20 10:02:57.979381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:04.461  test_start
00:06:04.462  test_end
00:06:04.462  Performance:   333159 events per second
00:06:04.462  
00:06:04.462  real	0m1.494s
00:06:04.462  user	0m1.357s
00:06:04.462  sys	0m0.130s
00:06:04.462   10:02:59 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:04.462   10:02:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:06:04.462  ************************************
00:06:04.462  END TEST event_reactor_perf
00:06:04.462  ************************************
00:06:04.462    10:02:59 event -- event/event.sh@49 -- # uname -s
00:06:04.462   10:02:59 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:06:04.462   10:02:59 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:04.462   10:02:59 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:04.462   10:02:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:04.462   10:02:59 event -- common/autotest_common.sh@10 -- # set +x
00:06:04.462  ************************************
00:06:04.462  START TEST event_scheduler
00:06:04.462  ************************************
00:06:04.462   10:02:59 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:04.462  * Looking for test storage...
00:06:04.462  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler
00:06:04.462    10:02:59 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:04.462     10:02:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version
00:06:04.462     10:02:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:04.462    10:02:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:04.462    10:02:59 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:04.462    10:02:59 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:04.462    10:02:59 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:04.462    10:02:59 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:06:04.462    10:02:59 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:06:04.462    10:02:59 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:06:04.462    10:02:59 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:06:04.462    10:02:59 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:06:04.462    10:02:59 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:06:04.462    10:02:59 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:06:04.462    10:02:59 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:04.462    10:02:59 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:06:04.462    10:02:59 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:06:04.462    10:02:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:04.462    10:02:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:04.462     10:02:59 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:06:04.462     10:02:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:06:04.462     10:02:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:04.462     10:02:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:06:04.462    10:02:59 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:06:04.462     10:02:59 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:06:04.462     10:02:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:06:04.462     10:02:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:04.462     10:02:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:06:04.462    10:02:59 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:06:04.462    10:02:59 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:04.462    10:02:59 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:04.462    10:02:59 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:06:04.462    10:02:59 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:04.462    10:02:59 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:04.462  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:04.462  		--rc genhtml_branch_coverage=1
00:06:04.462  		--rc genhtml_function_coverage=1
00:06:04.462  		--rc genhtml_legend=1
00:06:04.462  		--rc geninfo_all_blocks=1
00:06:04.462  		--rc geninfo_unexecuted_blocks=1
00:06:04.462  		
00:06:04.462  		'
00:06:04.462    10:02:59 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:04.462  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:04.462  		--rc genhtml_branch_coverage=1
00:06:04.462  		--rc genhtml_function_coverage=1
00:06:04.462  		--rc genhtml_legend=1
00:06:04.462  		--rc geninfo_all_blocks=1
00:06:04.462  		--rc geninfo_unexecuted_blocks=1
00:06:04.462  		
00:06:04.462  		'
00:06:04.462    10:02:59 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:04.462  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:04.462  		--rc genhtml_branch_coverage=1
00:06:04.462  		--rc genhtml_function_coverage=1
00:06:04.462  		--rc genhtml_legend=1
00:06:04.462  		--rc geninfo_all_blocks=1
00:06:04.462  		--rc geninfo_unexecuted_blocks=1
00:06:04.462  		
00:06:04.462  		'
00:06:04.462    10:02:59 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:04.462  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:04.462  		--rc genhtml_branch_coverage=1
00:06:04.462  		--rc genhtml_function_coverage=1
00:06:04.462  		--rc genhtml_legend=1
00:06:04.462  		--rc geninfo_all_blocks=1
00:06:04.462  		--rc geninfo_unexecuted_blocks=1
00:06:04.462  		
00:06:04.462  		'
00:06:04.462   10:02:59 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:06:04.462   10:02:59 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1741946
00:06:04.462   10:02:59 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:06:04.462   10:02:59 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:06:04.462   10:02:59 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1741946
00:06:04.462   10:02:59 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1741946 ']'
00:06:04.462   10:02:59 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:04.462   10:02:59 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:04.462   10:02:59 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:04.462  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:04.462   10:02:59 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:04.462   10:02:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:04.462  [2024-11-20 10:02:59.483571] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:06:04.462  [2024-11-20 10:02:59.483713] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1741946 ]
00:06:04.720  [2024-11-20 10:02:59.622369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:04.720  [2024-11-20 10:02:59.750012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:04.720  [2024-11-20 10:02:59.750086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:04.720  [2024-11-20 10:02:59.750126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:04.720  [2024-11-20 10:02:59.750150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:05.653   10:03:00 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:05.653   10:03:00 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:06:05.653   10:03:00 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:05.653   10:03:00 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:05.653   10:03:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:05.653  [2024-11-20 10:03:00.453296] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:06:05.653  [2024-11-20 10:03:00.453342] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:06:05.653  [2024-11-20 10:03:00.453388] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:06:05.653  [2024-11-20 10:03:00.453415] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:06:05.653  [2024-11-20 10:03:00.453435] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:06:05.653   10:03:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:05.653   10:03:00 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:05.653   10:03:00 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:05.653   10:03:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:05.912  [2024-11-20 10:03:00.788574] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:06:05.912   10:03:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:05.912   10:03:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:05.912   10:03:00 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:05.912   10:03:00 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:05.912   10:03:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:05.912  ************************************
00:06:05.912  START TEST scheduler_create_thread
00:06:05.912  ************************************
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:05.912  2
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:05.912  3
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:05.912  4
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:05.912  5
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:05.912  6
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:05.912  7
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:05.912  8
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:05.912  9
00:06:05.912   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:05.913   10:03:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:06:05.913   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:05.913   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:05.913  10
00:06:05.913   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:05.913    10:03:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:06:05.913    10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:05.913    10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:05.913    10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:05.913   10:03:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:06:05.913   10:03:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:06:05.913   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:05.913   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:05.913   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:05.913    10:03:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:06:05.913    10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:05.913    10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:05.913    10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:05.913   10:03:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:06:05.913   10:03:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:06:05.913   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:05.913   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:05.913   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:05.913  
00:06:05.913  real	0m0.109s
00:06:05.913  user	0m0.009s
00:06:05.913  sys	0m0.005s
00:06:05.913   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:05.913   10:03:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:05.913  ************************************
00:06:05.913  END TEST scheduler_create_thread
00:06:05.913  ************************************
00:06:05.913   10:03:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:06:05.913   10:03:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1741946
00:06:05.913   10:03:00 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1741946 ']'
00:06:05.913   10:03:00 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 1741946
00:06:05.913    10:03:00 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:06:05.913   10:03:00 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:05.913    10:03:00 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1741946
00:06:05.913   10:03:00 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:06:05.913   10:03:00 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:06:05.913   10:03:00 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1741946'
00:06:05.913  killing process with pid 1741946
00:06:05.913   10:03:00 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1741946
00:06:05.913   10:03:00 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1741946
00:06:06.479  [2024-11-20 10:03:01.408044] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:06:07.414  
00:06:07.414  real	0m3.167s
00:06:07.414  user	0m5.452s
00:06:07.414  sys	0m0.509s
00:06:07.414   10:03:02 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:07.414   10:03:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:07.414  ************************************
00:06:07.414  END TEST event_scheduler
00:06:07.414  ************************************
00:06:07.414   10:03:02 event -- event/event.sh@51 -- # modprobe -n nbd
00:06:07.414   10:03:02 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:07.414   10:03:02 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:07.414   10:03:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:07.414   10:03:02 event -- common/autotest_common.sh@10 -- # set +x
00:06:07.414  ************************************
00:06:07.414  START TEST app_repeat
00:06:07.414  ************************************
00:06:07.414   10:03:02 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:06:07.414   10:03:02 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:07.414   10:03:02 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:07.414   10:03:02 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:06:07.414   10:03:02 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:07.414   10:03:02 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:06:07.414   10:03:02 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:06:07.414   10:03:02 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:06:07.414   10:03:02 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1742509
00:06:07.414   10:03:02 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:06:07.414   10:03:02 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:06:07.414   10:03:02 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1742509'
00:06:07.414  Process app_repeat pid: 1742509
00:06:07.414   10:03:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:07.414   10:03:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:06:07.414  spdk_app_start Round 0
00:06:07.414   10:03:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1742509 /var/tmp/spdk-nbd.sock
00:06:07.414   10:03:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1742509 ']'
00:06:07.414   10:03:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:07.414   10:03:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:07.414   10:03:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:07.414  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:07.414   10:03:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:07.414   10:03:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:07.414  [2024-11-20 10:03:02.522425] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:06:07.414  [2024-11-20 10:03:02.522587] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1742509 ]
00:06:07.672  [2024-11-20 10:03:02.659482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:07.672  [2024-11-20 10:03:02.782153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:07.672  [2024-11-20 10:03:02.782172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:08.674   10:03:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:08.674   10:03:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:08.674   10:03:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:08.932  Malloc0
00:06:08.932   10:03:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:09.191  Malloc1
00:06:09.191   10:03:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:09.191   10:03:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:09.191   10:03:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:09.191   10:03:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:09.191   10:03:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:09.191   10:03:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:09.191   10:03:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:09.191   10:03:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:09.191   10:03:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:09.191   10:03:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:09.191   10:03:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:09.191   10:03:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:09.191   10:03:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:09.191   10:03:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:09.191   10:03:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:09.191   10:03:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:09.448  /dev/nbd0
00:06:09.448    10:03:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:09.448   10:03:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:09.448   10:03:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:09.448   10:03:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:09.448   10:03:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:09.448   10:03:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:09.448   10:03:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:09.448   10:03:04 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:09.448   10:03:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:09.448   10:03:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:09.448   10:03:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:09.448  1+0 records in
00:06:09.448  1+0 records out
00:06:09.448  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200759 s, 20.4 MB/s
00:06:09.448    10:03:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:06:09.448   10:03:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:09.448   10:03:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:06:09.448   10:03:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:09.448   10:03:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:09.448   10:03:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:09.448   10:03:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:09.448   10:03:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:10.014  /dev/nbd1
00:06:10.014    10:03:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:10.014   10:03:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:10.014   10:03:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:10.014   10:03:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:10.014   10:03:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:10.014   10:03:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:10.014   10:03:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:10.014   10:03:04 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:10.014   10:03:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:10.014   10:03:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:10.014   10:03:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:10.014  1+0 records in
00:06:10.014  1+0 records out
00:06:10.014  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218215 s, 18.8 MB/s
00:06:10.014    10:03:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:06:10.014   10:03:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:10.014   10:03:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:06:10.014   10:03:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:10.014   10:03:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:10.014   10:03:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:10.014   10:03:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:10.014    10:03:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:10.014    10:03:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:10.014     10:03:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:10.273    10:03:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:10.273    {
00:06:10.273      "nbd_device": "/dev/nbd0",
00:06:10.273      "bdev_name": "Malloc0"
00:06:10.273    },
00:06:10.273    {
00:06:10.273      "nbd_device": "/dev/nbd1",
00:06:10.273      "bdev_name": "Malloc1"
00:06:10.273    }
00:06:10.273  ]'
00:06:10.273     10:03:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:10.273    {
00:06:10.273      "nbd_device": "/dev/nbd0",
00:06:10.273      "bdev_name": "Malloc0"
00:06:10.273    },
00:06:10.273    {
00:06:10.273      "nbd_device": "/dev/nbd1",
00:06:10.273      "bdev_name": "Malloc1"
00:06:10.273    }
00:06:10.273  ]'
00:06:10.273     10:03:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:10.273    10:03:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:10.273  /dev/nbd1'
00:06:10.273     10:03:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:10.273  /dev/nbd1'
00:06:10.273     10:03:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:10.273    10:03:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:10.273    10:03:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:10.273  256+0 records in
00:06:10.273  256+0 records out
00:06:10.273  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00382432 s, 274 MB/s
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:10.273  256+0 records in
00:06:10.273  256+0 records out
00:06:10.273  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247342 s, 42.4 MB/s
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:10.273  256+0 records in
00:06:10.273  256+0 records out
00:06:10.273  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298119 s, 35.2 MB/s
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:10.273   10:03:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:10.535    10:03:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:10.535   10:03:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:10.535   10:03:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:10.535   10:03:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:10.535   10:03:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:10.535   10:03:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:10.535   10:03:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:10.535   10:03:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:10.535   10:03:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:10.535   10:03:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:10.795    10:03:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:10.795   10:03:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:10.795   10:03:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:10.795   10:03:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:10.795   10:03:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:10.795   10:03:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:10.795   10:03:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:10.795   10:03:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:10.795    10:03:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:10.795    10:03:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:10.795     10:03:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:11.053    10:03:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:11.053     10:03:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:11.053     10:03:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:11.311    10:03:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:11.311     10:03:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:11.311     10:03:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:11.311     10:03:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:11.311    10:03:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:11.311    10:03:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:11.311   10:03:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:11.311   10:03:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:11.311   10:03:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:11.311   10:03:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:11.569   10:03:06 event.app_repeat -- event/event.sh@35 -- # sleep 3
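An aside on the teardown just traced: after the disks are stopped, `nbd_get_count` pipes an empty device list through `grep -c /dev/nbd`, and the xtrace then shows `true` running. That is the telltale of a `|| true` guard: `grep -c` prints `0` on no matches but exits non-zero, which would abort a `set -e` script. A minimal sketch of that counting idiom (the function name here is hypothetical, not the one in `nbd_common.sh`):

```shell
#!/usr/bin/env bash
set -e

count_nbd_devices() {
    # grep -c prints the match count but exits 1 when the count is 0;
    # `|| true` keeps a `set -e` script alive on an empty device list,
    # which is why the trace shows `true` executing right after grep.
    grep -c /dev/nbd || true
}
```

With the guard, an empty list yields `0` instead of killing the script.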
00:06:12.943  [2024-11-20 10:03:07.676612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:12.943  [2024-11-20 10:03:07.788736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:12.943  [2024-11-20 10:03:07.788737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:12.943  [2024-11-20 10:03:07.970900] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:12.943  [2024-11-20 10:03:07.970980] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:14.841   10:03:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:14.841   10:03:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:06:14.841  spdk_app_start Round 1
00:06:14.841   10:03:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1742509 /var/tmp/spdk-nbd.sock
00:06:14.841   10:03:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1742509 ']'
00:06:14.841   10:03:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:14.841   10:03:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:14.841   10:03:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:14.841  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:14.841   10:03:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:14.841   10:03:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:14.841   10:03:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:14.841   10:03:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:14.841   10:03:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:15.405  Malloc0
00:06:15.405   10:03:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:15.663  Malloc1
00:06:15.663   10:03:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:15.663   10:03:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:15.664   10:03:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:15.664   10:03:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:15.664   10:03:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:15.664   10:03:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:15.664   10:03:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:15.664   10:03:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:15.664   10:03:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:15.664   10:03:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:15.664   10:03:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:15.664   10:03:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:15.664   10:03:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:15.664   10:03:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:15.664   10:03:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:15.664   10:03:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:15.921  /dev/nbd0
00:06:15.921    10:03:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:15.921   10:03:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:15.921   10:03:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:15.921   10:03:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:15.921   10:03:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:15.921   10:03:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:15.921   10:03:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:15.921   10:03:10 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:15.921   10:03:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:15.921   10:03:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:15.921   10:03:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:15.921  1+0 records in
00:06:15.921  1+0 records out
00:06:15.921  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218261 s, 18.8 MB/s
00:06:15.921    10:03:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:06:15.921   10:03:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:15.921   10:03:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:06:15.921   10:03:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:15.921   10:03:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:15.921   10:03:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:15.921   10:03:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:15.921   10:03:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:16.179  /dev/nbd1
00:06:16.179    10:03:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:16.179   10:03:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:16.179   10:03:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:16.179   10:03:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:16.179   10:03:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:16.179   10:03:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:16.179   10:03:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:16.179   10:03:11 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:16.179   10:03:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:16.179   10:03:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:16.179   10:03:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:16.179  1+0 records in
00:06:16.179  1+0 records out
00:06:16.179  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231318 s, 17.7 MB/s
00:06:16.179    10:03:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:06:16.179   10:03:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:16.179   10:03:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:06:16.179   10:03:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:16.179   10:03:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:16.179   10:03:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:16.179   10:03:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
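The `waitfornbd` calls traced above poll `/proc/partitions` up to 20 times until the kernel exposes the new nbd device, breaking out on the first hit. A sketch of that loop, with the partitions file taken as a parameter (an assumption made here purely so the loop can be exercised without a real nbd device):

```shell
#!/usr/bin/env bash
set -e

# Hypothetical stand-in for the waitfornbd helper seen in the trace:
# poll a partitions listing for the device name, up to 20 attempts.
waitfornbd_sketch() {
    local nbd_name=$1
    local partitions=${2:-/proc/partitions}
    local i
    for ((i = 1; i <= 20; i++)); do
        # -w matches nbd0 as a whole word, so nbd0 does not match nbd10
        if grep -q -w "$nbd_name" "$partitions"; then
            return 0   # device visible; mirrors the `break` in the trace
        fi
        sleep 0.1
    done
    return 1           # device never appeared
}
```

The real helper additionally sanity-reads one 4 KiB block from the device with `dd ... iflag=direct`, which is the `1+0 records in/out` output in the log.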
00:06:16.179    10:03:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:16.179    10:03:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:16.179     10:03:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:16.436    10:03:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:16.437    {
00:06:16.437      "nbd_device": "/dev/nbd0",
00:06:16.437      "bdev_name": "Malloc0"
00:06:16.437    },
00:06:16.437    {
00:06:16.437      "nbd_device": "/dev/nbd1",
00:06:16.437      "bdev_name": "Malloc1"
00:06:16.437    }
00:06:16.437  ]'
00:06:16.437     10:03:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:16.437    {
00:06:16.437      "nbd_device": "/dev/nbd0",
00:06:16.437      "bdev_name": "Malloc0"
00:06:16.437    },
00:06:16.437    {
00:06:16.437      "nbd_device": "/dev/nbd1",
00:06:16.437      "bdev_name": "Malloc1"
00:06:16.437    }
00:06:16.437  ]'
00:06:16.437     10:03:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:16.695    10:03:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:16.695  /dev/nbd1'
00:06:16.695     10:03:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:16.695  /dev/nbd1'
00:06:16.695     10:03:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:16.695    10:03:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:16.695    10:03:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:16.695  256+0 records in
00:06:16.695  256+0 records out
00:06:16.695  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00383984 s, 273 MB/s
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:16.695  256+0 records in
00:06:16.695  256+0 records out
00:06:16.695  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254019 s, 41.3 MB/s
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:16.695  256+0 records in
00:06:16.695  256+0 records out
00:06:16.695  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298173 s, 35.2 MB/s
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
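The write/verify pass just traced follows a simple pattern: fill a temp file with 1 MiB of random data, `dd` it onto each nbd device, then byte-compare each device against the temp file with `cmp -b -n 1M`. A self-contained sketch of that flow, using regular temp files in place of `/dev/nbd0` and `/dev/nbd1` (the real script also passes `oflag=direct`, omitted here since it does not apply to regular files):

```shell
#!/usr/bin/env bash
set -e

# Hypothetical stand-in for the nbd_dd_data_verify pattern in the trace.
# operation: "write" copies random data to every target, "verify"
# byte-compares every target against the reference file.
nbd_dd_data_verify_sketch() {
    local operation=$1 tmp_file=$2
    shift 2
    local dev
    if [ "$operation" = write ]; then
        # 256 x 4 KiB = 1 MiB of random reference data
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
        for dev in "$@"; do
            dd if="$tmp_file" of="$dev" bs=4096 count=256 conv=notrunc status=none
        done
    elif [ "$operation" = verify ]; then
        for dev in "$@"; do
            # -b prints differing bytes; -n 1M limits the compare to 1 MiB.
            # cmp exits non-zero on mismatch, failing the set -e script.
            cmp -b -n 1M "$tmp_file" "$dev"
        done
    fi
}
```

Because `cmp` exits non-zero on the first differing byte, a corrupted round-trip through the nbd device fails the test immediately.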
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:16.695   10:03:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:16.953    10:03:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:16.954   10:03:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:16.954   10:03:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:16.954   10:03:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:16.954   10:03:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:16.954   10:03:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:16.954   10:03:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:16.954   10:03:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:16.954   10:03:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:16.954   10:03:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:17.211    10:03:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:17.211   10:03:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:17.211   10:03:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:17.211   10:03:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:17.211   10:03:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:17.211   10:03:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:17.211   10:03:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:17.211   10:03:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:17.211    10:03:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:17.211    10:03:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:17.211     10:03:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:17.469    10:03:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:17.469     10:03:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:17.469     10:03:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:17.469    10:03:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:17.469     10:03:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:17.469     10:03:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:17.469     10:03:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:17.469    10:03:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:17.469    10:03:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:17.469   10:03:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:17.469   10:03:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:17.469   10:03:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:17.469   10:03:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:18.034   10:03:12 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:18.969  [2024-11-20 10:03:14.017767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:19.227  [2024-11-20 10:03:14.129583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:19.227  [2024-11-20 10:03:14.129586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:19.227  [2024-11-20 10:03:14.314155] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:19.227  [2024-11-20 10:03:14.314218] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:21.128   10:03:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:21.128   10:03:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:06:21.128  spdk_app_start Round 2
00:06:21.128   10:03:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1742509 /var/tmp/spdk-nbd.sock
00:06:21.128   10:03:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1742509 ']'
00:06:21.128   10:03:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:21.128   10:03:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:21.128   10:03:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:21.128  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:21.128   10:03:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:21.128   10:03:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:21.386   10:03:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:21.386   10:03:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:21.386   10:03:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:21.643  Malloc0
00:06:21.643   10:03:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:21.902  Malloc1
00:06:21.902   10:03:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:21.902   10:03:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:21.902   10:03:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:21.902   10:03:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:21.902   10:03:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:21.902   10:03:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:21.902   10:03:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:21.902   10:03:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:21.902   10:03:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:21.902   10:03:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:21.902   10:03:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:21.902   10:03:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:21.902   10:03:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:21.902   10:03:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:21.902   10:03:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:21.902   10:03:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:22.160  /dev/nbd0
00:06:22.160    10:03:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:22.160   10:03:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:22.160   10:03:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:22.160   10:03:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:22.160   10:03:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:22.160   10:03:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:22.160   10:03:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:22.160   10:03:17 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:22.160   10:03:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:22.160   10:03:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:22.160   10:03:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:22.160  1+0 records in
00:06:22.160  1+0 records out
00:06:22.160  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197421 s, 20.7 MB/s
00:06:22.160    10:03:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:06:22.160   10:03:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:22.160   10:03:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:06:22.160   10:03:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:22.160   10:03:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:22.160   10:03:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:22.160   10:03:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:22.160   10:03:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:22.418  /dev/nbd1
00:06:22.418    10:03:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:22.418   10:03:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:22.418   10:03:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:22.418   10:03:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:22.418   10:03:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:22.418   10:03:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:22.418   10:03:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:22.418   10:03:17 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:22.418   10:03:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:22.418   10:03:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:22.418   10:03:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:22.418  1+0 records in
00:06:22.418  1+0 records out
00:06:22.418  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184894 s, 22.2 MB/s
00:06:22.418    10:03:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:06:22.418   10:03:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:22.418   10:03:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:06:22.418   10:03:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:22.418   10:03:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:22.418   10:03:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:22.418   10:03:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:22.418    10:03:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:22.418    10:03:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:22.418     10:03:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:22.984    10:03:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:22.984    {
00:06:22.984      "nbd_device": "/dev/nbd0",
00:06:22.984      "bdev_name": "Malloc0"
00:06:22.984    },
00:06:22.984    {
00:06:22.984      "nbd_device": "/dev/nbd1",
00:06:22.984      "bdev_name": "Malloc1"
00:06:22.984    }
00:06:22.984  ]'
00:06:22.984     10:03:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:22.984    {
00:06:22.984      "nbd_device": "/dev/nbd0",
00:06:22.984      "bdev_name": "Malloc0"
00:06:22.984    },
00:06:22.984    {
00:06:22.984      "nbd_device": "/dev/nbd1",
00:06:22.984      "bdev_name": "Malloc1"
00:06:22.984    }
00:06:22.984  ]'
00:06:22.984     10:03:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:22.984    10:03:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:22.984  /dev/nbd1'
00:06:22.984     10:03:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:22.984  /dev/nbd1'
00:06:22.984     10:03:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:22.984    10:03:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:22.984    10:03:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:22.984  256+0 records in
00:06:22.984  256+0 records out
00:06:22.984  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00386905 s, 271 MB/s
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:22.984  256+0 records in
00:06:22.984  256+0 records out
00:06:22.984  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249878 s, 42.0 MB/s
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:22.984  256+0 records in
00:06:22.984  256+0 records out
00:06:22.984  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277657 s, 37.8 MB/s
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:22.984   10:03:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:23.242    10:03:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:23.242   10:03:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:23.242   10:03:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:23.242   10:03:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:23.242   10:03:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:23.242   10:03:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:23.242   10:03:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:23.242   10:03:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:23.242   10:03:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:23.242   10:03:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:23.501    10:03:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:23.501   10:03:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:23.501   10:03:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:23.501   10:03:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:23.501   10:03:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:23.501   10:03:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:23.501   10:03:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:23.501   10:03:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
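Both `waitfornbd_exit` calls above poll /proc/partitions with a bounded retry loop (lines 35-45 of nbd_common.sh): up to 20 iterations, breaking as soon as the device name disappears. A self-contained sketch of that loop, with a temp file standing in for /proc/partitions:

```shell
# Sketch of the waitfornbd_exit polling traced above: wait (bounded) until
# a device name no longer appears in the partitions listing. A temp file
# stands in for /proc/partitions so the sketch is runnable anywhere.
parts=$(mktemp)
printf '%s\n' nbd0 nbd1 > "$parts"

waitfornbd_exit_sketch() {
    nbd_name=$1
    i=1
    while [ "$i" -le 20 ]; do
        if ! grep -q -w "$nbd_name" "$parts"; then
            return 0              # device gone: success, like the 'break' above
        fi
        sleep 0.1
        i=$((i + 1))
    done
    return 1                      # still present after 20 tries
}

# Simulate the kernel detaching nbd0, then wait for it to vanish.
sed -i '/^nbd0$/d' "$parts"
waitfornbd_exit_sketch nbd0 && echo "nbd0 gone"
```

The bounded retry count is what keeps a stuck nbd_stop_disk from hanging the whole autotest run.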
00:06:23.501    10:03:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:23.501    10:03:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:23.501     10:03:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:23.759    10:03:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:23.759     10:03:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:23.759     10:03:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:23.759    10:03:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:23.759     10:03:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:23.759     10:03:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:23.759     10:03:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:23.759    10:03:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:23.759    10:03:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:23.759   10:03:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:23.759   10:03:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:23.759   10:03:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
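The `nbd_get_count` trace above ends with a bare `true` at nbd_common.sh line 65: `grep -c` exits non-zero when it counts zero matches, so the script guards it with `|| true` to survive under `set -e`. A minimal sketch of that counting step for the empty-list case seen here:

```shell
# Sketch of the nbd_get_count logic traced above. nbd_get_disks returned
# '[]', which jq -r '.[] | .nbd_device' reduces to an empty string; grep -c
# then prints 0 but exits 1, hence the '|| true' (the bare 'true' in the
# xtrace output) so a 'set -e' script does not abort.
nbd_disks_json='[]'   # what the RPC returned in this run
nbd_disks_name=''     # result of the jq filter over an empty array

count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"         # 0: every NBD device was detached
```

Without the guard, the happy path (all disks stopped, count 0) would paradoxically be the one that kills the test.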
00:06:23.759   10:03:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:24.324   10:03:19 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:25.257  [2024-11-20 10:03:20.323602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:25.516  [2024-11-20 10:03:20.441760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:25.516  [2024-11-20 10:03:20.441762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:25.516  [2024-11-20 10:03:20.629696] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:25.516  [2024-11-20 10:03:20.629767] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:27.413   10:03:22 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1742509 /var/tmp/spdk-nbd.sock
00:06:27.413   10:03:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1742509 ']'
00:06:27.413   10:03:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:27.413   10:03:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:27.413   10:03:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:27.413  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:27.413   10:03:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:27.413   10:03:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:27.671   10:03:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:27.671   10:03:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:27.671   10:03:22 event.app_repeat -- event/event.sh@39 -- # killprocess 1742509
00:06:27.671   10:03:22 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1742509 ']'
00:06:27.671   10:03:22 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1742509
00:06:27.671    10:03:22 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:06:27.671   10:03:22 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:27.671    10:03:22 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1742509
00:06:27.671   10:03:22 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:27.671   10:03:22 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:27.671   10:03:22 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1742509'
00:06:27.671  killing process with pid 1742509
00:06:27.671   10:03:22 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1742509
00:06:27.671   10:03:22 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1742509
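The `killprocess` sequence traced above checks the pid is still alive with `kill -0`, inspects its comm name via ps (refusing to signal a `sudo` wrapper), then SIGTERMs and waits. A simplified self-contained sketch of that flow (the real helper in autotest_common.sh handles the sudo case with extra logic that is omitted here):

```shell
# Sketch of the killprocess helper traced above: probe with 'kill -0',
# check the process name, then SIGTERM and reap. Simplified: the real
# helper treats a 'sudo' comm name specially instead of just refusing.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # process already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1           # never plain-SIGTERM a sudo wrapper
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap; exit 143 (SIGTERM) is fine
}

sleep 30 & pid=$!
killprocess_sketch "$pid" && echo "killed $pid"
```

The `kill -0` probe is the standard way to test for existence without sending a real signal.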
00:06:28.605  spdk_app_start is called in Round 0.
00:06:28.605  Shutdown signal received, stop current app iteration
00:06:28.605  Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 reinitialization...
00:06:28.605  spdk_app_start is called in Round 1.
00:06:28.605  Shutdown signal received, stop current app iteration
00:06:28.605  Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 reinitialization...
00:06:28.605  spdk_app_start is called in Round 2.
00:06:28.605  Shutdown signal received, stop current app iteration
00:06:28.605  Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 reinitialization...
00:06:28.605  spdk_app_start is called in Round 3.
00:06:28.605  Shutdown signal received, stop current app iteration
00:06:28.605   10:03:23 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:06:28.605   10:03:23 event.app_repeat -- event/event.sh@42 -- # return 0
00:06:28.605  
00:06:28.605  real	0m21.023s
00:06:28.605  user	0m45.195s
00:06:28.605  sys	0m3.443s
00:06:28.605   10:03:23 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:28.605   10:03:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:28.605  ************************************
00:06:28.605  END TEST app_repeat
00:06:28.605  ************************************
00:06:28.605   10:03:23 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:06:28.605   10:03:23 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:28.605   10:03:23 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:28.605   10:03:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:28.605   10:03:23 event -- common/autotest_common.sh@10 -- # set +x
00:06:28.605  ************************************
00:06:28.605  START TEST cpu_locks
00:06:28.605  ************************************
00:06:28.605   10:03:23 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:28.605  * Looking for test storage...
00:06:28.605  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event
00:06:28.605    10:03:23 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:28.605     10:03:23 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version
00:06:28.605     10:03:23 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:28.605    10:03:23 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:28.605    10:03:23 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:28.605    10:03:23 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:28.605    10:03:23 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:28.605    10:03:23 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:06:28.605    10:03:23 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:06:28.605    10:03:23 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:06:28.605    10:03:23 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:06:28.605    10:03:23 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:06:28.605    10:03:23 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:06:28.605    10:03:23 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:06:28.605    10:03:23 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:28.605    10:03:23 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:06:28.605    10:03:23 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:06:28.605    10:03:23 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:28.605    10:03:23 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:28.605     10:03:23 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:06:28.605     10:03:23 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:06:28.605     10:03:23 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:28.605     10:03:23 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:06:28.606    10:03:23 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:06:28.606     10:03:23 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:06:28.606     10:03:23 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:06:28.606     10:03:23 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:28.606     10:03:23 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:06:28.606    10:03:23 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:06:28.606    10:03:23 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:28.606    10:03:23 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:28.606    10:03:23 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:06:28.606    10:03:23 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:28.606    10:03:23 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:28.606  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:28.606  		--rc genhtml_branch_coverage=1
00:06:28.606  		--rc genhtml_function_coverage=1
00:06:28.606  		--rc genhtml_legend=1
00:06:28.606  		--rc geninfo_all_blocks=1
00:06:28.606  		--rc geninfo_unexecuted_blocks=1
00:06:28.606  		
00:06:28.606  		'
00:06:28.606    10:03:23 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:28.606  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:28.606  		--rc genhtml_branch_coverage=1
00:06:28.606  		--rc genhtml_function_coverage=1
00:06:28.606  		--rc genhtml_legend=1
00:06:28.606  		--rc geninfo_all_blocks=1
00:06:28.606  		--rc geninfo_unexecuted_blocks=1
00:06:28.606  		
00:06:28.606  		'
00:06:28.606    10:03:23 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:28.606  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:28.606  		--rc genhtml_branch_coverage=1
00:06:28.606  		--rc genhtml_function_coverage=1
00:06:28.606  		--rc genhtml_legend=1
00:06:28.606  		--rc geninfo_all_blocks=1
00:06:28.606  		--rc geninfo_unexecuted_blocks=1
00:06:28.606  		
00:06:28.606  		'
00:06:28.606    10:03:23 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:28.606  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:28.606  		--rc genhtml_branch_coverage=1
00:06:28.606  		--rc genhtml_function_coverage=1
00:06:28.606  		--rc genhtml_legend=1
00:06:28.606  		--rc geninfo_all_blocks=1
00:06:28.606  		--rc geninfo_unexecuted_blocks=1
00:06:28.606  		
00:06:28.606  		'
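The long trace above is scripts/common.sh's `cmp_versions` deciding `lt 1.15 2`: both versions are split on `.`, `-` and `:` into arrays and compared component-wise, which is why 1.15 is correctly less than 2 (1 < 2 settles it before 15 is ever looked at). A condensed bash sketch of that comparison:

```shell
# Sketch of the cmp_versions '<' path traced above: split both versions on
# '.', '-' or ':' and compare numerically component by component, padding
# the shorter version with zeros.
lt_sketch() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1        # greater at this component: not less
        (( a < b )) && return 0        # smaller: less-than decided
    done
    return 1                           # all components equal: not less
}

lt_sketch 1.15 2 && echo "1.15 < 2"
lt_sketch 2.1 2.0 || echo "2.1 >= 2.0"
```

Component-wise numeric comparison is what a plain string compare gets wrong (`"1.15" > "1.2"` lexically, but 1.15 < 1.2 as versions).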
00:06:28.606   10:03:23 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:28.606   10:03:23 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:28.606   10:03:23 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:28.606   10:03:23 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:28.606   10:03:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:28.606   10:03:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:28.606   10:03:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:28.606  ************************************
00:06:28.606  START TEST default_locks
00:06:28.606  ************************************
00:06:28.606   10:03:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:06:28.606   10:03:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1745760
00:06:28.606   10:03:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:28.606   10:03:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1745760
00:06:28.606   10:03:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1745760 ']'
00:06:28.606   10:03:23 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:28.606   10:03:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:28.606   10:03:23 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:28.606  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:28.606   10:03:23 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:28.606   10:03:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:28.865  [2024-11-20 10:03:23.821380] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:06:28.865  [2024-11-20 10:03:23.821518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745760 ]
00:06:28.865  [2024-11-20 10:03:23.955387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:29.123  [2024-11-20 10:03:24.074393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:30.057   10:03:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:30.057   10:03:24 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:06:30.057   10:03:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1745760
00:06:30.057   10:03:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1745760
00:06:30.057   10:03:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:30.314  lslocks: write error
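The `locks_exist` check traced above (`lslocks -p PID | grep -q spdk_cpu_lock`) asserts that the target still holds its CPU-core file lock. The underlying mechanism is an exclusive file lock; a self-contained sketch using `flock` on a temp file, assuming util-linux `flock` is available:

```shell
# Sketch of the file-lock mechanism behind locks_exist: take an exclusive
# lock on fd 9, observe that a second non-blocking attempt fails while it
# is held, then release it. A temp file stands in for the spdk_cpu_lock.
lockfile=$(mktemp)
exec 9>"$lockfile"
flock 9                               # exclusive lock held on fd 9

held=0
flock -n "$lockfile" true || held=1   # non-blocking attempt fails: lock held
flock -u 9                            # release
echo "held=$held"
```

This is the same visibility `lslocks` gives the test: while spdk_tgt is alive the lock shows up; once it is killed the lock vanishes.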
00:06:30.314   10:03:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1745760
00:06:30.314   10:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1745760 ']'
00:06:30.314   10:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1745760
00:06:30.314    10:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:06:30.314   10:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:30.314    10:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1745760
00:06:30.314   10:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:30.314   10:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:30.314   10:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1745760'
00:06:30.314  killing process with pid 1745760
00:06:30.314   10:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1745760
00:06:30.314   10:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1745760
00:06:32.212   10:03:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1745760
00:06:32.212   10:03:27 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:06:32.212   10:03:27 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1745760
00:06:32.212   10:03:27 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:32.212   10:03:27 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:32.212    10:03:27 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:32.212   10:03:27 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:32.212   10:03:27 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1745760
00:06:32.212   10:03:27 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1745760 ']'
00:06:32.212   10:03:27 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:32.212   10:03:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:32.212   10:03:27 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:32.212  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:32.212   10:03:27 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:32.212   10:03:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:32.212  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1745760) - No such process
00:06:32.212  ERROR: process (pid: 1745760) is no longer running
00:06:32.212   10:03:27 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:32.212   10:03:27 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:06:32.212   10:03:27 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:06:32.212   10:03:27 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:32.212   10:03:27 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:32.212   10:03:27 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:32.212   10:03:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:06:32.212   10:03:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:32.212   10:03:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:06:32.212   10:03:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:32.212  
00:06:32.212  real	0m3.546s
00:06:32.212  user	0m3.583s
00:06:32.212  sys	0m0.741s
00:06:32.212   10:03:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:32.212   10:03:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:32.212  ************************************
00:06:32.212  END TEST default_locks
00:06:32.212  ************************************
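The default_locks test above ends by exercising the `NOT` helper: `NOT waitforlisten 1745760` must fail (the process was killed, hence the "No such process" line), and `NOT` inverts that failure into success. A simplified sketch of the inversion (the real helper in autotest_common.sh also validates the wrapped command and special-cases exit codes above 128 from signals, which is omitted here):

```shell
# Sketch of the NOT helper traced above: run a command that is expected to
# fail, capture its exit status, and succeed exactly when it was non-zero,
# so a 'set -e' test script treats an expected failure as a pass.
NOT_sketch() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))    # success for NOT means the wrapped command failed
}

NOT_sketch false && echo "false failed, as expected"
NOT_sketch true || echo "true succeeded, so NOT fails"
```

Capturing the status with `|| es=$?` instead of running the command bare is what keeps `set -e` from aborting before the inversion can happen.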
00:06:32.212   10:03:27 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:32.212   10:03:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:32.212   10:03:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:32.212   10:03:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:32.212  ************************************
00:06:32.212  START TEST default_locks_via_rpc
00:06:32.212  ************************************
00:06:32.212   10:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:06:32.212   10:03:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1746195
00:06:32.212   10:03:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:32.212   10:03:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1746195
00:06:32.212   10:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1746195 ']'
00:06:32.212   10:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:32.212   10:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:32.212   10:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:32.212  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:32.212   10:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:32.212   10:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:32.470  [2024-11-20 10:03:27.420527] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:06:32.470  [2024-11-20 10:03:27.420658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1746195 ]
00:06:32.470  [2024-11-20 10:03:27.553087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:32.728  [2024-11-20 10:03:27.668896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:33.662   10:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:33.662   10:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:33.662   10:03:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:33.662   10:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:33.662   10:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:33.662   10:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:33.662   10:03:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:33.662   10:03:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:33.662   10:03:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:33.662   10:03:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:33.662   10:03:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:33.662   10:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:33.662   10:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:33.662   10:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:33.662   10:03:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1746195
00:06:33.662   10:03:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1746195
00:06:33.662   10:03:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:33.662   10:03:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1746195
00:06:33.662   10:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1746195 ']'
00:06:33.662   10:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1746195
00:06:33.662    10:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:06:33.662   10:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:33.662    10:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1746195
00:06:33.920   10:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:33.920   10:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:33.920   10:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1746195'
00:06:33.920  killing process with pid 1746195
00:06:33.920   10:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1746195
00:06:33.920   10:03:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1746195
00:06:35.821  
00:06:35.821  real	0m3.527s
00:06:35.821  user	0m3.553s
00:06:35.821  sys	0m0.696s
00:06:35.821   10:03:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:35.821   10:03:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:35.821  ************************************
00:06:35.821  END TEST default_locks_via_rpc
00:06:35.821  ************************************
00:06:35.821   10:03:30 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:35.821   10:03:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:35.821   10:03:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:35.821   10:03:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:35.821  ************************************
00:06:35.821  START TEST non_locking_app_on_locked_coremask
00:06:35.821  ************************************
00:06:35.821   10:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:06:35.821   10:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1746628
00:06:35.821   10:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:35.821   10:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1746628 /var/tmp/spdk.sock
00:06:35.821   10:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1746628 ']'
00:06:35.821   10:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:35.821   10:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:35.821   10:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:35.821  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:35.821   10:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:35.821   10:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:36.079  [2024-11-20 10:03:31.002146] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:06:36.079  [2024-11-20 10:03:31.002276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1746628 ]
00:06:36.079  [2024-11-20 10:03:31.137221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:36.336  [2024-11-20 10:03:31.263416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:37.270   10:03:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:37.270   10:03:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:37.270   10:03:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1746767
00:06:37.270   10:03:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1746767 /var/tmp/spdk2.sock
00:06:37.270   10:03:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1746767 ']'
00:06:37.270   10:03:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:37.270   10:03:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:37.270   10:03:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:37.270  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:37.270   10:03:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:37.270   10:03:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:37.270   10:03:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:37.270  [2024-11-20 10:03:32.187289] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:06:37.270  [2024-11-20 10:03:32.187424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1746767 ]
00:06:37.270  [2024-11-20 10:03:32.379085] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:37.270  [2024-11-20 10:03:32.379146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:37.528  [2024-11-20 10:03:32.617608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:40.055   10:03:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:40.055   10:03:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:40.055   10:03:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1746628
00:06:40.055   10:03:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1746628
00:06:40.055   10:03:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:40.620  lslocks: write error
00:06:40.620   10:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1746628
00:06:40.620   10:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1746628 ']'
00:06:40.620   10:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1746628
00:06:40.620    10:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:40.620   10:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:40.620    10:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1746628
00:06:40.620   10:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:40.620   10:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:40.620   10:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1746628'
00:06:40.620  killing process with pid 1746628
00:06:40.620   10:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1746628
00:06:40.620   10:03:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1746628
00:06:44.803   10:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1746767
00:06:44.803   10:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1746767 ']'
00:06:44.803   10:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1746767
00:06:44.803    10:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:44.803   10:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:44.803    10:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1746767
00:06:44.803   10:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:44.803   10:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:44.803   10:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1746767'
00:06:44.803  killing process with pid 1746767
00:06:44.803   10:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1746767
00:06:44.803   10:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1746767
00:06:46.701  
00:06:46.701  real	0m10.702s
00:06:46.701  user	0m11.142s
00:06:46.701  sys	0m1.523s
00:06:46.701   10:03:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:46.701   10:03:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:46.701  ************************************
00:06:46.701  END TEST non_locking_app_on_locked_coremask
00:06:46.701  ************************************
00:06:46.701   10:03:41 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:46.701   10:03:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:46.701   10:03:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:46.701   10:03:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:46.701  ************************************
00:06:46.701  START TEST locking_app_on_unlocked_coremask
00:06:46.701  ************************************
00:06:46.701   10:03:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:06:46.702   10:03:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1747991
00:06:46.702   10:03:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:46.702   10:03:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1747991 /var/tmp/spdk.sock
00:06:46.702   10:03:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1747991 ']'
00:06:46.702   10:03:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:46.702   10:03:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:46.702   10:03:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:46.702  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:46.702   10:03:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:46.702   10:03:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:46.702  [2024-11-20 10:03:41.753092] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:06:46.702  [2024-11-20 10:03:41.753246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1747991 ]
00:06:46.960  [2024-11-20 10:03:41.884257] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:46.960  [2024-11-20 10:03:41.884303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:46.961  [2024-11-20 10:03:41.999635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:47.991   10:03:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:47.991   10:03:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:47.991   10:03:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1748135
00:06:47.991   10:03:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1748135 /var/tmp/spdk2.sock
00:06:47.991   10:03:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:47.991   10:03:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1748135 ']'
00:06:47.991   10:03:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:47.991   10:03:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:47.991   10:03:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:47.991  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:47.991   10:03:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:47.991   10:03:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:47.991  [2024-11-20 10:03:42.915456] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:06:47.991  [2024-11-20 10:03:42.915621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1748135 ]
00:06:47.991  [2024-11-20 10:03:43.102419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:48.249  [2024-11-20 10:03:43.334684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:50.779   10:03:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:50.779   10:03:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:50.779   10:03:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1748135
00:06:50.779   10:03:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1748135
00:06:50.780   10:03:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:51.038  lslocks: write error
00:06:51.038   10:03:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1747991
00:06:51.038   10:03:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1747991 ']'
00:06:51.038   10:03:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1747991
00:06:51.038    10:03:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:51.038   10:03:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:51.038    10:03:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1747991
00:06:51.038   10:03:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:51.038   10:03:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:51.038   10:03:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1747991'
00:06:51.038  killing process with pid 1747991
00:06:51.038   10:03:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1747991
00:06:51.038   10:03:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1747991
00:06:55.221   10:03:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1748135
00:06:55.221   10:03:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1748135 ']'
00:06:55.221   10:03:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1748135
00:06:55.221    10:03:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:55.221   10:03:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:55.221    10:03:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1748135
00:06:55.221   10:03:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:55.221   10:03:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:55.221   10:03:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1748135'
00:06:55.221  killing process with pid 1748135
00:06:55.221   10:03:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1748135
00:06:55.221   10:03:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1748135
00:06:57.122  
00:06:57.122  real	0m10.466s
00:06:57.122  user	0m10.937s
00:06:57.122  sys	0m1.433s
00:06:57.122   10:03:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:57.122   10:03:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:57.122  ************************************
00:06:57.122  END TEST locking_app_on_unlocked_coremask
00:06:57.122  ************************************
00:06:57.122   10:03:52 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:57.122   10:03:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:57.122   10:03:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:57.122   10:03:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:57.122  ************************************
00:06:57.122  START TEST locking_app_on_locked_coremask
00:06:57.122  ************************************
00:06:57.122   10:03:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:06:57.122   10:03:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1749241
00:06:57.122   10:03:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:57.122   10:03:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1749241 /var/tmp/spdk.sock
00:06:57.122   10:03:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1749241 ']'
00:06:57.122   10:03:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:57.122   10:03:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:57.122   10:03:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:57.122  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:57.122   10:03:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:57.122   10:03:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:57.381  [2024-11-20 10:03:52.276855] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:06:57.381  [2024-11-20 10:03:52.276983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749241 ]
00:06:57.381  [2024-11-20 10:03:52.412226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:57.640  [2024-11-20 10:03:52.530638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:58.574   10:03:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:58.575   10:03:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:58.575   10:03:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1749377
00:06:58.575   10:03:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:58.575   10:03:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1749377 /var/tmp/spdk2.sock
00:06:58.575   10:03:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:06:58.575   10:03:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1749377 /var/tmp/spdk2.sock
00:06:58.575   10:03:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:58.575   10:03:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:58.575    10:03:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:58.575   10:03:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:58.575   10:03:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1749377 /var/tmp/spdk2.sock
00:06:58.575   10:03:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1749377 ']'
00:06:58.575   10:03:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:58.575   10:03:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:58.575   10:03:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:58.575  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:58.575   10:03:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:58.575   10:03:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:58.575  [2024-11-20 10:03:53.465593] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:06:58.575  [2024-11-20 10:03:53.465735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749377 ]
00:06:58.575  [2024-11-20 10:03:53.649194] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1749241 has claimed it.
00:06:58.575  [2024-11-20 10:03:53.649268] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:59.141  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1749377) - No such process
00:06:59.141  ERROR: process (pid: 1749377) is no longer running
00:06:59.141   10:03:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:59.141   10:03:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:06:59.141   10:03:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:06:59.141   10:03:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:59.141   10:03:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:59.141   10:03:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:59.141   10:03:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1749241
00:06:59.141   10:03:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1749241
00:06:59.141   10:03:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:59.399  lslocks: write error
00:06:59.399   10:03:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1749241
00:06:59.399   10:03:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1749241 ']'
00:06:59.399   10:03:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1749241
00:06:59.399    10:03:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:59.399   10:03:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:59.399    10:03:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1749241
00:06:59.656   10:03:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:59.656   10:03:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:59.656   10:03:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1749241'
00:06:59.656  killing process with pid 1749241
00:06:59.656   10:03:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1749241
00:06:59.656   10:03:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1749241
00:07:01.558  
00:07:01.558  real	0m4.364s
00:07:01.558  user	0m4.645s
00:07:01.558  sys	0m0.970s
00:07:01.558   10:03:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:01.558   10:03:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:01.558  ************************************
00:07:01.558  END TEST locking_app_on_locked_coremask
00:07:01.558  ************************************
00:07:01.558   10:03:56 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:07:01.558   10:03:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:01.558   10:03:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:01.558   10:03:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:01.558  ************************************
00:07:01.558  START TEST locking_overlapped_coremask
00:07:01.558  ************************************
00:07:01.558   10:03:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:07:01.558   10:03:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1749806
00:07:01.558   10:03:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:07:01.558   10:03:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1749806 /var/tmp/spdk.sock
00:07:01.558   10:03:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1749806 ']'
00:07:01.558   10:03:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:01.558   10:03:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:01.558   10:03:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:01.558  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:01.558   10:03:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:01.558   10:03:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:01.817  [2024-11-20 10:03:56.688137] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:07:01.817  [2024-11-20 10:03:56.688278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749806 ]
00:07:01.817  [2024-11-20 10:03:56.816926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:01.817  [2024-11-20 10:03:56.934279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:01.817  [2024-11-20 10:03:56.934359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:01.817  [2024-11-20 10:03:56.934366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:02.753   10:03:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:02.753   10:03:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:02.753   10:03:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1749951
00:07:02.753   10:03:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:07:02.753   10:03:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1749951 /var/tmp/spdk2.sock
00:07:02.753   10:03:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:07:02.753   10:03:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1749951 /var/tmp/spdk2.sock
00:07:02.753   10:03:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:07:02.753   10:03:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:02.753    10:03:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:07:02.753   10:03:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:02.753   10:03:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1749951 /var/tmp/spdk2.sock
00:07:02.753   10:03:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1749951 ']'
00:07:02.753   10:03:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:02.753   10:03:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:02.753   10:03:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:02.753  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:02.753   10:03:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:02.753   10:03:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:03.012  [2024-11-20 10:03:57.894732] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:07:03.012  [2024-11-20 10:03:57.894893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749951 ]
00:07:03.012  [2024-11-20 10:03:58.095825] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1749806 has claimed it.
00:07:03.012  [2024-11-20 10:03:58.095909] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:07:03.579  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1749951) - No such process
00:07:03.579  ERROR: process (pid: 1749951) is no longer running
00:07:03.579   10:03:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:03.579   10:03:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:07:03.579   10:03:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:07:03.579   10:03:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:03.579   10:03:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:03.579   10:03:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:03.579   10:03:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:07:03.579   10:03:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:07:03.579   10:03:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:07:03.579   10:03:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:07:03.579   10:03:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1749806
00:07:03.579   10:03:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1749806 ']'
00:07:03.579   10:03:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1749806
00:07:03.579    10:03:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:07:03.579   10:03:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:03.579    10:03:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1749806
00:07:03.579   10:03:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:03.579   10:03:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:03.579   10:03:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1749806'
00:07:03.579  killing process with pid 1749806
00:07:03.579   10:03:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1749806
00:07:03.579   10:03:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1749806
00:07:06.109  
00:07:06.109  real	0m4.198s
00:07:06.109  user	0m11.470s
00:07:06.109  sys	0m0.779s
00:07:06.109   10:04:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:06.109   10:04:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:06.109  ************************************
00:07:06.109  END TEST locking_overlapped_coremask
00:07:06.109  ************************************
00:07:06.109   10:04:00 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:07:06.109   10:04:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:06.109   10:04:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:06.109   10:04:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:06.109  ************************************
00:07:06.110  START TEST locking_overlapped_coremask_via_rpc
00:07:06.110  ************************************
00:07:06.110   10:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:07:06.110   10:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1750374
00:07:06.110   10:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:07:06.110   10:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1750374 /var/tmp/spdk.sock
00:07:06.110   10:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1750374 ']'
00:07:06.110   10:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:06.110   10:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:06.110   10:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:06.110  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:06.110   10:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:06.110   10:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:06.110  [2024-11-20 10:04:00.947510] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:07:06.110  [2024-11-20 10:04:00.947657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1750374 ]
00:07:06.110  [2024-11-20 10:04:01.085634] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:06.110  [2024-11-20 10:04:01.085699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:06.110  [2024-11-20 10:04:01.207556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:06.110  [2024-11-20 10:04:01.207596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:06.110  [2024-11-20 10:04:01.207589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:07.045   10:04:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:07.045   10:04:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:07.045   10:04:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1750513
00:07:07.045   10:04:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1750513 /var/tmp/spdk2.sock
00:07:07.045   10:04:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1750513 ']'
00:07:07.045   10:04:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:07.045   10:04:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:07.045   10:04:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:07.045  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:07.045   10:04:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:07.045   10:04:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:07:07.045   10:04:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:07.304  [2024-11-20 10:04:02.177943] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:07:07.304  [2024-11-20 10:04:02.178089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1750513 ]
00:07:07.304  [2024-11-20 10:04:02.374596] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:07.304  [2024-11-20 10:04:02.374661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:07.562  [2024-11-20 10:04:02.634697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:07.562  [2024-11-20 10:04:02.634712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:07.562  [2024-11-20 10:04:02.634724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:10.094    10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:10.094  [2024-11-20 10:04:04.893707] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1750374 has claimed it.
00:07:10.094  request:
00:07:10.094  {
00:07:10.094  "method": "framework_enable_cpumask_locks",
00:07:10.094  "req_id": 1
00:07:10.094  }
00:07:10.094  Got JSON-RPC error response
00:07:10.094  response:
00:07:10.094  {
00:07:10.094  "code": -32603,
00:07:10.094  "message": "Failed to claim CPU core: 2"
00:07:10.094  }
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1750374 /var/tmp/spdk.sock
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1750374 ']'
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:10.094  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:10.094   10:04:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:10.094   10:04:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:10.094   10:04:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:10.094   10:04:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1750513 /var/tmp/spdk2.sock
00:07:10.095   10:04:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1750513 ']'
00:07:10.095   10:04:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:10.095   10:04:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:10.095   10:04:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:10.095  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:10.095   10:04:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:10.095   10:04:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:10.352   10:04:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:10.352   10:04:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:10.352   10:04:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:07:10.352   10:04:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:07:10.352   10:04:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:07:10.352   10:04:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:07:10.352  
00:07:10.352  real	0m4.632s
00:07:10.352  user	0m1.615s
00:07:10.352  sys	0m0.267s
00:07:10.352   10:04:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:10.352   10:04:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:10.352  ************************************
00:07:10.352  END TEST locking_overlapped_coremask_via_rpc
00:07:10.352  ************************************
00:07:10.610   10:04:05 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:07:10.610   10:04:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1750374 ]]
00:07:10.610   10:04:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1750374
00:07:10.610   10:04:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1750374 ']'
00:07:10.610   10:04:05 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1750374
00:07:10.610    10:04:05 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:07:10.610   10:04:05 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:10.610    10:04:05 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1750374
00:07:10.610   10:04:05 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:10.610   10:04:05 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:10.610   10:04:05 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1750374'
00:07:10.610  killing process with pid 1750374
00:07:10.610   10:04:05 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1750374
00:07:10.610   10:04:05 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1750374
00:07:13.142   10:04:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1750513 ]]
00:07:13.142   10:04:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1750513
00:07:13.142   10:04:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1750513 ']'
00:07:13.142   10:04:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1750513
00:07:13.142    10:04:07 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:07:13.142   10:04:07 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:13.142    10:04:07 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1750513
00:07:13.142   10:04:07 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:07:13.142   10:04:07 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:07:13.142   10:04:07 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1750513'
00:07:13.142  killing process with pid 1750513
00:07:13.142   10:04:07 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1750513
00:07:13.142   10:04:07 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1750513
00:07:15.045   10:04:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:07:15.045   10:04:09 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:07:15.045   10:04:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1750374 ]]
00:07:15.045   10:04:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1750374
00:07:15.045   10:04:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1750374 ']'
00:07:15.045   10:04:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1750374
00:07:15.045  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1750374) - No such process
00:07:15.045   10:04:09 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1750374 is not found'
00:07:15.045  Process with pid 1750374 is not found
00:07:15.045   10:04:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1750513 ]]
00:07:15.045   10:04:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1750513
00:07:15.045   10:04:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1750513 ']'
00:07:15.045   10:04:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1750513
00:07:15.045  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1750513) - No such process
00:07:15.045   10:04:09 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1750513 is not found'
00:07:15.045  Process with pid 1750513 is not found
00:07:15.045   10:04:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:07:15.045  
00:07:15.045  real	0m46.406s
00:07:15.045  user	1m22.316s
00:07:15.045  sys	0m7.680s
00:07:15.045   10:04:09 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:15.045   10:04:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:15.045  ************************************
00:07:15.045  END TEST cpu_locks
00:07:15.045  ************************************
00:07:15.045  
00:07:15.045  real	1m15.579s
00:07:15.045  user	2m20.244s
00:07:15.045  sys	0m12.332s
00:07:15.045   10:04:09 event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:15.045   10:04:09 event -- common/autotest_common.sh@10 -- # set +x
00:07:15.045  ************************************
00:07:15.045  END TEST event
00:07:15.045  ************************************
00:07:15.045   10:04:09  -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/thread.sh
00:07:15.045   10:04:09  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:15.045   10:04:09  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:15.045   10:04:09  -- common/autotest_common.sh@10 -- # set +x
00:07:15.045  ************************************
00:07:15.045  START TEST thread
00:07:15.045  ************************************
00:07:15.045   10:04:10 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/thread.sh
00:07:15.045  * Looking for test storage...
00:07:15.045  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread
00:07:15.045    10:04:10 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:15.045     10:04:10 thread -- common/autotest_common.sh@1693 -- # lcov --version
00:07:15.045     10:04:10 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:15.045    10:04:10 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:15.045    10:04:10 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:15.045    10:04:10 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:15.045    10:04:10 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:15.045    10:04:10 thread -- scripts/common.sh@336 -- # IFS=.-:
00:07:15.045    10:04:10 thread -- scripts/common.sh@336 -- # read -ra ver1
00:07:15.045    10:04:10 thread -- scripts/common.sh@337 -- # IFS=.-:
00:07:15.045    10:04:10 thread -- scripts/common.sh@337 -- # read -ra ver2
00:07:15.045    10:04:10 thread -- scripts/common.sh@338 -- # local 'op=<'
00:07:15.045    10:04:10 thread -- scripts/common.sh@340 -- # ver1_l=2
00:07:15.045    10:04:10 thread -- scripts/common.sh@341 -- # ver2_l=1
00:07:15.045    10:04:10 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:15.045    10:04:10 thread -- scripts/common.sh@344 -- # case "$op" in
00:07:15.045    10:04:10 thread -- scripts/common.sh@345 -- # : 1
00:07:15.045    10:04:10 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:15.045    10:04:10 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:15.046     10:04:10 thread -- scripts/common.sh@365 -- # decimal 1
00:07:15.046     10:04:10 thread -- scripts/common.sh@353 -- # local d=1
00:07:15.046     10:04:10 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:15.046     10:04:10 thread -- scripts/common.sh@355 -- # echo 1
00:07:15.046    10:04:10 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:07:15.046     10:04:10 thread -- scripts/common.sh@366 -- # decimal 2
00:07:15.046     10:04:10 thread -- scripts/common.sh@353 -- # local d=2
00:07:15.046     10:04:10 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:15.046     10:04:10 thread -- scripts/common.sh@355 -- # echo 2
00:07:15.046    10:04:10 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:07:15.046    10:04:10 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:15.046    10:04:10 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:15.303    10:04:10 thread -- scripts/common.sh@368 -- # return 0
00:07:15.304    10:04:10 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:15.304    10:04:10 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:15.304  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:15.304  		--rc genhtml_branch_coverage=1
00:07:15.304  		--rc genhtml_function_coverage=1
00:07:15.304  		--rc genhtml_legend=1
00:07:15.304  		--rc geninfo_all_blocks=1
00:07:15.304  		--rc geninfo_unexecuted_blocks=1
00:07:15.304  		
00:07:15.304  		'
00:07:15.304    10:04:10 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:15.304  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:15.304  		--rc genhtml_branch_coverage=1
00:07:15.304  		--rc genhtml_function_coverage=1
00:07:15.304  		--rc genhtml_legend=1
00:07:15.304  		--rc geninfo_all_blocks=1
00:07:15.304  		--rc geninfo_unexecuted_blocks=1
00:07:15.304  		
00:07:15.304  		'
00:07:15.304    10:04:10 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:15.304  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:15.304  		--rc genhtml_branch_coverage=1
00:07:15.304  		--rc genhtml_function_coverage=1
00:07:15.304  		--rc genhtml_legend=1
00:07:15.304  		--rc geninfo_all_blocks=1
00:07:15.304  		--rc geninfo_unexecuted_blocks=1
00:07:15.304  		
00:07:15.304  		'
00:07:15.304    10:04:10 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:07:15.304  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:15.304  		--rc genhtml_branch_coverage=1
00:07:15.304  		--rc genhtml_function_coverage=1
00:07:15.304  		--rc genhtml_legend=1
00:07:15.304  		--rc geninfo_all_blocks=1
00:07:15.304  		--rc geninfo_unexecuted_blocks=1
00:07:15.304  		
00:07:15.304  		'
00:07:15.304   10:04:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:07:15.304   10:04:10 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:07:15.304   10:04:10 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:15.304   10:04:10 thread -- common/autotest_common.sh@10 -- # set +x
00:07:15.304  ************************************
00:07:15.304  START TEST thread_poller_perf
00:07:15.304  ************************************
00:07:15.304   10:04:10 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:07:15.304  [2024-11-20 10:04:10.230754] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:07:15.304  [2024-11-20 10:04:10.230896] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1751557 ]
00:07:15.304  [2024-11-20 10:04:10.370848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:15.562  [2024-11-20 10:04:10.497224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:15.562  Running 1000 pollers for 1 seconds with 1 microseconds period.
00:07:16.937  [2024-11-20T09:04:12.058Z]  ======================================
00:07:16.937  [2024-11-20T09:04:12.058Z]  busy:2715256581 (cyc)
00:07:16.937  [2024-11-20T09:04:12.058Z]  total_run_count: 343000
00:07:16.937  [2024-11-20T09:04:12.058Z]  tsc_hz: 2700000000 (cyc)
00:07:16.937  [2024-11-20T09:04:12.058Z]  ======================================
00:07:16.937  [2024-11-20T09:04:12.058Z]  poller_cost: 7916 (cyc), 2931 (nsec)
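The figures in this report are internally consistent and can be checked by hand: `poller_cost` in cycles is the busy cycle count divided by the number of poller invocations, and the nanosecond figure converts that through the reported 2.7 GHz TSC. A quick sanity check of the arithmetic (values copied from the report above):

```shell
# busy / total_run_count -> cycles per poller invocation
echo $(( 2715256581 / 343000 ))               # 7916 cyc, as reported
# cycles -> nanoseconds at tsc_hz = 2700000000
echo $(( 7916 * 1000000000 / 2700000000 ))    # 2931 nsec, as reported
```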
00:07:16.937  
00:07:16.937  real	0m1.524s
00:07:16.937  user	0m1.374s
00:07:16.937  sys	0m0.143s
00:07:16.937   10:04:11 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:16.937   10:04:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:07:16.937  ************************************
00:07:16.937  END TEST thread_poller_perf
00:07:16.937  ************************************
00:07:16.937   10:04:11 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:07:16.937   10:04:11 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:07:16.937   10:04:11 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:16.937   10:04:11 thread -- common/autotest_common.sh@10 -- # set +x
00:07:16.937  ************************************
00:07:16.937  START TEST thread_poller_perf
00:07:16.937  ************************************
00:07:16.937   10:04:11 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:07:16.937  [2024-11-20 10:04:11.806773] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:07:16.937  [2024-11-20 10:04:11.806928] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1751712 ]
00:07:16.937  [2024-11-20 10:04:11.937163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:17.195  [2024-11-20 10:04:12.060683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:17.195  Running 1000 pollers for 1 seconds with 0 microseconds period.
00:07:18.568  [2024-11-20T09:04:13.689Z]  ======================================
00:07:18.568  [2024-11-20T09:04:13.689Z]  busy:2704614753 (cyc)
00:07:18.568  [2024-11-20T09:04:13.689Z]  total_run_count: 4594000
00:07:18.568  [2024-11-20T09:04:13.689Z]  tsc_hz: 2700000000 (cyc)
00:07:18.568  [2024-11-20T09:04:13.689Z]  ======================================
00:07:18.568  [2024-11-20T09:04:13.689Z]  poller_cost: 588 (cyc), 217 (nsec)
00:07:18.568  
00:07:18.568  real	0m1.508s
00:07:18.568  user	0m1.364s
00:07:18.568  sys	0m0.136s
00:07:18.568   10:04:13 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:18.568   10:04:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:07:18.568  ************************************
00:07:18.568  END TEST thread_poller_perf
00:07:18.568  ************************************
00:07:18.568   10:04:13 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:07:18.568  
00:07:18.568  real	0m3.280s
00:07:18.568  user	0m2.878s
00:07:18.568  sys	0m0.402s
00:07:18.568   10:04:13 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:18.568   10:04:13 thread -- common/autotest_common.sh@10 -- # set +x
00:07:18.568  ************************************
00:07:18.568  END TEST thread
00:07:18.568  ************************************
00:07:18.568   10:04:13  -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:07:18.568   10:04:13  -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app/cmdline.sh
00:07:18.568   10:04:13  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:18.568   10:04:13  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:18.568   10:04:13  -- common/autotest_common.sh@10 -- # set +x
00:07:18.568  ************************************
00:07:18.568  START TEST app_cmdline
00:07:18.568  ************************************
00:07:18.568   10:04:13 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app/cmdline.sh
00:07:18.568  * Looking for test storage...
00:07:18.568  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app
00:07:18.568    10:04:13 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:18.568     10:04:13 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version
00:07:18.568     10:04:13 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:18.568    10:04:13 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:18.568    10:04:13 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:18.568    10:04:13 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:18.568    10:04:13 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:18.568    10:04:13 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:07:18.568    10:04:13 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:07:18.568    10:04:13 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:07:18.568    10:04:13 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:07:18.568    10:04:13 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:07:18.568    10:04:13 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:07:18.568    10:04:13 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:07:18.568    10:04:13 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:18.568    10:04:13 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:07:18.568    10:04:13 app_cmdline -- scripts/common.sh@345 -- # : 1
00:07:18.568    10:04:13 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:18.568    10:04:13 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:18.568     10:04:13 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:07:18.568     10:04:13 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:07:18.568     10:04:13 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:18.568     10:04:13 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:07:18.568    10:04:13 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:07:18.568     10:04:13 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:07:18.568     10:04:13 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:07:18.568     10:04:13 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:18.568     10:04:13 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:07:18.568    10:04:13 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:07:18.568    10:04:13 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:18.568    10:04:13 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:18.568    10:04:13 app_cmdline -- scripts/common.sh@368 -- # return 0
00:07:18.568    10:04:13 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:18.568    10:04:13 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:18.569  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:18.569  		--rc genhtml_branch_coverage=1
00:07:18.569  		--rc genhtml_function_coverage=1
00:07:18.569  		--rc genhtml_legend=1
00:07:18.569  		--rc geninfo_all_blocks=1
00:07:18.569  		--rc geninfo_unexecuted_blocks=1
00:07:18.569  		
00:07:18.569  		'
00:07:18.569    10:04:13 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:18.569  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:18.569  		--rc genhtml_branch_coverage=1
00:07:18.569  		--rc genhtml_function_coverage=1
00:07:18.569  		--rc genhtml_legend=1
00:07:18.569  		--rc geninfo_all_blocks=1
00:07:18.569  		--rc geninfo_unexecuted_blocks=1
00:07:18.569  		
00:07:18.569  		'
00:07:18.569    10:04:13 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:18.569  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:18.569  		--rc genhtml_branch_coverage=1
00:07:18.569  		--rc genhtml_function_coverage=1
00:07:18.569  		--rc genhtml_legend=1
00:07:18.569  		--rc geninfo_all_blocks=1
00:07:18.569  		--rc geninfo_unexecuted_blocks=1
00:07:18.569  		
00:07:18.569  		'
00:07:18.569    10:04:13 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:07:18.569  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:18.569  		--rc genhtml_branch_coverage=1
00:07:18.569  		--rc genhtml_function_coverage=1
00:07:18.569  		--rc genhtml_legend=1
00:07:18.569  		--rc geninfo_all_blocks=1
00:07:18.569  		--rc geninfo_unexecuted_blocks=1
00:07:18.569  		
00:07:18.569  		'
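The `cmp_versions` trace above (scripts/common.sh@333-@368) splits each version string on `.-:`, then compares component by component, padding the shorter array with zeros. A minimal self-contained sketch of that logic, with names taken from the trace (the real helper lives in scripts/common.sh and also handles `>`, `ge`, and `le` operators not shown here):

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions logic traced above: split on .-:,
# compare component-wise, treat missing components as 0.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v a b
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && { [[ $op == ">" ]]; return; }
        (( a < b )) && { [[ $op == "<" ]]; return; }
    done
    [[ $op == "==" ]]
}

# Same comparison the trace performs: lt 1.15 2 -> cmp_versions 1.15 '<' 2
cmp_versions 1.15 "<" 2 && echo "1.15 < 2"
```

This is why the trace shows `ver1[v]=1` compared against `ver2[v]=2` and then `return 0`: the first component already decides the ordering.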
00:07:18.569   10:04:13 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:07:18.569   10:04:13 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1752040
00:07:18.569   10:04:13 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:07:18.569   10:04:13 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1752040
00:07:18.569   10:04:13 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1752040 ']'
00:07:18.569   10:04:13 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:18.569   10:04:13 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:18.569   10:04:13 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:18.569  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:18.569   10:04:13 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:18.569   10:04:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:18.569  [2024-11-20 10:04:13.595147] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:07:18.569  [2024-11-20 10:04:13.595280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752040 ]
00:07:18.827  [2024-11-20 10:04:13.733504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:18.827  [2024-11-20 10:04:13.852514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:19.762   10:04:14 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:19.762   10:04:14 app_cmdline -- common/autotest_common.sh@868 -- # return 0
00:07:19.762   10:04:14 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:07:20.019  {
00:07:20.019    "version": "SPDK v25.01-pre git sha1 a5dab6cf7",
00:07:20.019    "fields": {
00:07:20.019      "major": 25,
00:07:20.019      "minor": 1,
00:07:20.019      "patch": 0,
00:07:20.019      "suffix": "-pre",
00:07:20.019      "commit": "a5dab6cf7"
00:07:20.020    }
00:07:20.020  }
00:07:20.020   10:04:14 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:07:20.020   10:04:14 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:07:20.020   10:04:14 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:07:20.020   10:04:14 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:07:20.020    10:04:14 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:07:20.020    10:04:14 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:20.020    10:04:14 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:07:20.020    10:04:14 app_cmdline -- app/cmdline.sh@26 -- # sort
00:07:20.020    10:04:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:20.020    10:04:14 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:20.020   10:04:14 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:07:20.020   10:04:14 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
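The check above (cmdline.sh@22-@28) verifies that a target started with `--rpcs-allowed spdk_get_version,rpc_get_methods` exposes exactly those two methods. A stand-in sketch of that assertion, with the live `rpc_cmd` call replaced by its known result from the log:

```shell
#!/usr/bin/env bash
# Sketch of the allowed-methods check in cmdline.sh. The methods array is
# a stand-in for: methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
expected_methods=()
expected_methods+=("rpc_get_methods")
expected_methods+=("spdk_get_version")

methods=("rpc_get_methods" "spdk_get_version")  # sorted output, per the log

(( ${#methods[@]} == ${#expected_methods[@]} )) &&
    [[ "${methods[*]}" == "${expected_methods[*]}" ]] &&
    echo "RPC surface matches --rpcs-allowed"
```

The `[[ ... == \r\p\c... ]]` form in the trace is just this string comparison after bash has escaped the pattern characters in the right-hand side.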
00:07:20.020   10:04:14 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:20.020   10:04:14 app_cmdline -- common/autotest_common.sh@652 -- # local es=0
00:07:20.020   10:04:14 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:20.020   10:04:14 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:07:20.020   10:04:14 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:20.020    10:04:14 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:07:20.020   10:04:14 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:20.020    10:04:14 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:07:20.020   10:04:14 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:20.020   10:04:14 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:07:20.020   10:04:14 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py ]]
00:07:20.020   10:04:14 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:20.292  request:
00:07:20.292  {
00:07:20.293    "method": "env_dpdk_get_mem_stats",
00:07:20.293    "req_id": 1
00:07:20.293  }
00:07:20.293  Got JSON-RPC error response
00:07:20.293  response:
00:07:20.293  {
00:07:20.293    "code": -32601,
00:07:20.293    "message": "Method not found"
00:07:20.293  }
00:07:20.293   10:04:15 app_cmdline -- common/autotest_common.sh@655 -- # es=1
00:07:20.293   10:04:15 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:20.293   10:04:15 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:20.293   10:04:15 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 ))
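The `NOT` wrapper above asserts that calling `env_dpdk_get_mem_stats` fails, because that method is not in the `--rpcs-allowed` list; `-32601` is the standard JSON-RPC "method not found" error code. A sketch of the assertion, where `rpc_stub` is a hypothetical stand-in for rpc.py talking to the restricted target:

```shell
#!/usr/bin/env bash
# rpc_stub mimics a target started with
# --rpcs-allowed spdk_get_version,rpc_get_methods: any other method name
# gets a JSON-RPC -32601 error and a nonzero exit status.
rpc_stub() {
    case "$1" in
        spdk_get_version|rpc_get_methods) return 0 ;;
        *)
            echo '{"code": -32601, "message": "Method not found"}' >&2
            return 1
            ;;
    esac
}

NOT() { ! "$@"; }  # asserts the wrapped command fails

NOT rpc_stub env_dpdk_get_mem_stats && echo "restricted method rejected"
```

This mirrors the log: the rejected call yields `es=1`, which `NOT` inverts so the test step itself succeeds.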
00:07:20.293   10:04:15 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1752040
00:07:20.293   10:04:15 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1752040 ']'
00:07:20.293   10:04:15 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1752040
00:07:20.293    10:04:15 app_cmdline -- common/autotest_common.sh@959 -- # uname
00:07:20.293   10:04:15 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:20.293    10:04:15 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1752040
00:07:20.293   10:04:15 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:20.293   10:04:15 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:20.293   10:04:15 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1752040'
00:07:20.293  killing process with pid 1752040
00:07:20.293   10:04:15 app_cmdline -- common/autotest_common.sh@973 -- # kill 1752040
00:07:20.293   10:04:15 app_cmdline -- common/autotest_common.sh@978 -- # wait 1752040
00:07:22.197  
00:07:22.197  real	0m3.942s
00:07:22.197  user	0m4.379s
00:07:22.198  sys	0m0.690s
00:07:22.198   10:04:17 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:22.198   10:04:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:22.198  ************************************
00:07:22.198  END TEST app_cmdline
00:07:22.198  ************************************
00:07:22.198   10:04:17  -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app/version.sh
00:07:22.198   10:04:17  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:22.198   10:04:17  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:22.198   10:04:17  -- common/autotest_common.sh@10 -- # set +x
00:07:22.457  ************************************
00:07:22.457  START TEST version
00:07:22.457  ************************************
00:07:22.457   10:04:17 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app/version.sh
00:07:22.457  * Looking for test storage...
00:07:22.457  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app
00:07:22.457    10:04:17 version -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:22.457     10:04:17 version -- common/autotest_common.sh@1693 -- # lcov --version
00:07:22.457     10:04:17 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:22.457    10:04:17 version -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:22.457    10:04:17 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:22.457    10:04:17 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:22.457    10:04:17 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:22.457    10:04:17 version -- scripts/common.sh@336 -- # IFS=.-:
00:07:22.457    10:04:17 version -- scripts/common.sh@336 -- # read -ra ver1
00:07:22.457    10:04:17 version -- scripts/common.sh@337 -- # IFS=.-:
00:07:22.457    10:04:17 version -- scripts/common.sh@337 -- # read -ra ver2
00:07:22.457    10:04:17 version -- scripts/common.sh@338 -- # local 'op=<'
00:07:22.457    10:04:17 version -- scripts/common.sh@340 -- # ver1_l=2
00:07:22.457    10:04:17 version -- scripts/common.sh@341 -- # ver2_l=1
00:07:22.457    10:04:17 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:22.457    10:04:17 version -- scripts/common.sh@344 -- # case "$op" in
00:07:22.457    10:04:17 version -- scripts/common.sh@345 -- # : 1
00:07:22.457    10:04:17 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:22.457    10:04:17 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:22.457     10:04:17 version -- scripts/common.sh@365 -- # decimal 1
00:07:22.457     10:04:17 version -- scripts/common.sh@353 -- # local d=1
00:07:22.457     10:04:17 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:22.457     10:04:17 version -- scripts/common.sh@355 -- # echo 1
00:07:22.457    10:04:17 version -- scripts/common.sh@365 -- # ver1[v]=1
00:07:22.457     10:04:17 version -- scripts/common.sh@366 -- # decimal 2
00:07:22.457     10:04:17 version -- scripts/common.sh@353 -- # local d=2
00:07:22.457     10:04:17 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:22.457     10:04:17 version -- scripts/common.sh@355 -- # echo 2
00:07:22.457    10:04:17 version -- scripts/common.sh@366 -- # ver2[v]=2
00:07:22.457    10:04:17 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:22.457    10:04:17 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:22.457    10:04:17 version -- scripts/common.sh@368 -- # return 0
00:07:22.457    10:04:17 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:22.457    10:04:17 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:22.457  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:22.457  		--rc genhtml_branch_coverage=1
00:07:22.457  		--rc genhtml_function_coverage=1
00:07:22.457  		--rc genhtml_legend=1
00:07:22.457  		--rc geninfo_all_blocks=1
00:07:22.457  		--rc geninfo_unexecuted_blocks=1
00:07:22.457  		
00:07:22.457  		'
00:07:22.457    10:04:17 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:22.457  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:22.457  		--rc genhtml_branch_coverage=1
00:07:22.457  		--rc genhtml_function_coverage=1
00:07:22.457  		--rc genhtml_legend=1
00:07:22.457  		--rc geninfo_all_blocks=1
00:07:22.457  		--rc geninfo_unexecuted_blocks=1
00:07:22.457  		
00:07:22.457  		'
00:07:22.457    10:04:17 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:22.457  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:22.457  		--rc genhtml_branch_coverage=1
00:07:22.457  		--rc genhtml_function_coverage=1
00:07:22.457  		--rc genhtml_legend=1
00:07:22.457  		--rc geninfo_all_blocks=1
00:07:22.457  		--rc geninfo_unexecuted_blocks=1
00:07:22.457  		
00:07:22.457  		'
00:07:22.457    10:04:17 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:07:22.457  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:22.457  		--rc genhtml_branch_coverage=1
00:07:22.457  		--rc genhtml_function_coverage=1
00:07:22.457  		--rc genhtml_legend=1
00:07:22.457  		--rc geninfo_all_blocks=1
00:07:22.457  		--rc geninfo_unexecuted_blocks=1
00:07:22.457  		
00:07:22.457  		'
00:07:22.457    10:04:17 version -- app/version.sh@17 -- # get_header_version major
00:07:22.457    10:04:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/include/spdk/version.h
00:07:22.457    10:04:17 version -- app/version.sh@14 -- # cut -f2
00:07:22.457    10:04:17 version -- app/version.sh@14 -- # tr -d '"'
00:07:22.457   10:04:17 version -- app/version.sh@17 -- # major=25
00:07:22.457    10:04:17 version -- app/version.sh@18 -- # get_header_version minor
00:07:22.457    10:04:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/include/spdk/version.h
00:07:22.457    10:04:17 version -- app/version.sh@14 -- # cut -f2
00:07:22.457    10:04:17 version -- app/version.sh@14 -- # tr -d '"'
00:07:22.457   10:04:17 version -- app/version.sh@18 -- # minor=1
00:07:22.457    10:04:17 version -- app/version.sh@19 -- # get_header_version patch
00:07:22.457    10:04:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/include/spdk/version.h
00:07:22.457    10:04:17 version -- app/version.sh@14 -- # cut -f2
00:07:22.457    10:04:17 version -- app/version.sh@14 -- # tr -d '"'
00:07:22.457   10:04:17 version -- app/version.sh@19 -- # patch=0
00:07:22.457    10:04:17 version -- app/version.sh@20 -- # get_header_version suffix
00:07:22.457    10:04:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/include/spdk/version.h
00:07:22.457    10:04:17 version -- app/version.sh@14 -- # cut -f2
00:07:22.457    10:04:17 version -- app/version.sh@14 -- # tr -d '"'
00:07:22.457   10:04:17 version -- app/version.sh@20 -- # suffix=-pre
00:07:22.457   10:04:17 version -- app/version.sh@22 -- # version=25.1
00:07:22.457   10:04:17 version -- app/version.sh@25 -- # (( patch != 0 ))
00:07:22.457   10:04:17 version -- app/version.sh@28 -- # version=25.1rc0
00:07:22.457   10:04:17 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python
00:07:22.457    10:04:17 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:07:22.457   10:04:17 version -- app/version.sh@30 -- # py_version=25.1rc0
00:07:22.457   10:04:17 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
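The version.sh trace above greps `SPDK_VERSION_MAJOR`/`MINOR`/`PATCH`/`SUFFIX` out of include/spdk/version.h and composes `25.1rc0` (the `.patch` component is dropped when patch is 0, and a pre-release suffix becomes `rc0`). A self-contained sketch against a temp stand-in header; note the real script extracts the field with `cut -f2` on the header's tab separators, whereas this sketch uses `awk '{print $NF}'` so it works with any whitespace:

```shell
#!/usr/bin/env bash
# Stand-in for include/spdk/version.h with the values seen in the log.
hdr=$(mktemp)
cat > "$hdr" <<'EOF'
#define SPDK_VERSION_MAJOR 25
#define SPDK_VERSION_MINOR 1
#define SPDK_VERSION_PATCH 0
#define SPDK_VERSION_SUFFIX "-pre"
EOF

get_header_version() {
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" |
        awk '{print $NF}' | tr -d '"'
}

major=$(get_header_version MAJOR)
minor=$(get_header_version MINOR)
patch=$(get_header_version PATCH)
version=$major.$minor
if (( patch != 0 )); then version=$version.$patch; fi
version=${version}rc0          # pre-release suffix maps to rc0
echo "$version"                # → 25.1rc0
```

This is then compared against `python3 -c 'import spdk; print(spdk.__version__)'`, which is why the final check in the log is `[[ 25.1rc0 == 25.1rc0 ]]`.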
00:07:22.457  
00:07:22.457  real	0m0.196s
00:07:22.457  user	0m0.125s
00:07:22.457  sys	0m0.097s
00:07:22.457   10:04:17 version -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:22.457   10:04:17 version -- common/autotest_common.sh@10 -- # set +x
00:07:22.457  ************************************
00:07:22.457  END TEST version
00:07:22.457  ************************************
00:07:22.457   10:04:17  -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:07:22.458   10:04:17  -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:07:22.458    10:04:17  -- spdk/autotest.sh@194 -- # uname -s
00:07:22.458   10:04:17  -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:07:22.458   10:04:17  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:07:22.458   10:04:17  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:07:22.458   10:04:17  -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:07:22.458   10:04:17  -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:07:22.458   10:04:17  -- spdk/autotest.sh@260 -- # timing_exit lib
00:07:22.458   10:04:17  -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:22.458   10:04:17  -- common/autotest_common.sh@10 -- # set +x
00:07:22.717   10:04:17  -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:07:22.717   10:04:17  -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:07:22.717   10:04:17  -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:07:22.717   10:04:17  -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:07:22.717   10:04:17  -- spdk/autotest.sh@315 -- # '[' 1 -eq 1 ']'
00:07:22.717   10:04:17  -- spdk/autotest.sh@316 -- # HUGENODE=0
00:07:22.717   10:04:17  -- spdk/autotest.sh@316 -- # run_test vfio_user_qemu /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/vfio_user.sh --iso
00:07:22.717   10:04:17  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:22.717   10:04:17  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:22.717   10:04:17  -- common/autotest_common.sh@10 -- # set +x
00:07:22.717  ************************************
00:07:22.717  START TEST vfio_user_qemu
00:07:22.717  ************************************
00:07:22.717   10:04:17 vfio_user_qemu -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/vfio_user.sh --iso
00:07:22.717  * Looking for test storage...
00:07:22.717  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user
00:07:22.717    10:04:17 vfio_user_qemu -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:22.717     10:04:17 vfio_user_qemu -- common/autotest_common.sh@1693 -- # lcov --version
00:07:22.717     10:04:17 vfio_user_qemu -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:22.717    10:04:17 vfio_user_qemu -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:22.717    10:04:17 vfio_user_qemu -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:22.717    10:04:17 vfio_user_qemu -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:22.717    10:04:17 vfio_user_qemu -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:22.717    10:04:17 vfio_user_qemu -- scripts/common.sh@336 -- # IFS=.-:
00:07:22.717    10:04:17 vfio_user_qemu -- scripts/common.sh@336 -- # read -ra ver1
00:07:22.717    10:04:17 vfio_user_qemu -- scripts/common.sh@337 -- # IFS=.-:
00:07:22.717    10:04:17 vfio_user_qemu -- scripts/common.sh@337 -- # read -ra ver2
00:07:22.717    10:04:17 vfio_user_qemu -- scripts/common.sh@338 -- # local 'op=<'
00:07:22.717    10:04:17 vfio_user_qemu -- scripts/common.sh@340 -- # ver1_l=2
00:07:22.717    10:04:17 vfio_user_qemu -- scripts/common.sh@341 -- # ver2_l=1
00:07:22.717    10:04:17 vfio_user_qemu -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:22.717    10:04:17 vfio_user_qemu -- scripts/common.sh@344 -- # case "$op" in
00:07:22.717    10:04:17 vfio_user_qemu -- scripts/common.sh@345 -- # : 1
00:07:22.717    10:04:17 vfio_user_qemu -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:22.717    10:04:17 vfio_user_qemu -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:22.717     10:04:17 vfio_user_qemu -- scripts/common.sh@365 -- # decimal 1
00:07:22.717     10:04:17 vfio_user_qemu -- scripts/common.sh@353 -- # local d=1
00:07:22.717     10:04:17 vfio_user_qemu -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:22.717     10:04:17 vfio_user_qemu -- scripts/common.sh@355 -- # echo 1
00:07:22.717    10:04:17 vfio_user_qemu -- scripts/common.sh@365 -- # ver1[v]=1
00:07:22.717     10:04:17 vfio_user_qemu -- scripts/common.sh@366 -- # decimal 2
00:07:22.717     10:04:17 vfio_user_qemu -- scripts/common.sh@353 -- # local d=2
00:07:22.717     10:04:17 vfio_user_qemu -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:22.717     10:04:17 vfio_user_qemu -- scripts/common.sh@355 -- # echo 2
00:07:22.717    10:04:17 vfio_user_qemu -- scripts/common.sh@366 -- # ver2[v]=2
00:07:22.717    10:04:17 vfio_user_qemu -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:22.717    10:04:17 vfio_user_qemu -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:22.717    10:04:17 vfio_user_qemu -- scripts/common.sh@368 -- # return 0
00:07:22.717    10:04:17 vfio_user_qemu -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:22.717    10:04:17 vfio_user_qemu -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:22.717  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:22.717  		--rc genhtml_branch_coverage=1
00:07:22.717  		--rc genhtml_function_coverage=1
00:07:22.717  		--rc genhtml_legend=1
00:07:22.717  		--rc geninfo_all_blocks=1
00:07:22.717  		--rc geninfo_unexecuted_blocks=1
00:07:22.717  		
00:07:22.717  		'
00:07:22.717    10:04:17 vfio_user_qemu -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:22.717  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:22.717  		--rc genhtml_branch_coverage=1
00:07:22.717  		--rc genhtml_function_coverage=1
00:07:22.717  		--rc genhtml_legend=1
00:07:22.717  		--rc geninfo_all_blocks=1
00:07:22.717  		--rc geninfo_unexecuted_blocks=1
00:07:22.717  		
00:07:22.717  		'
00:07:22.717    10:04:17 vfio_user_qemu -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:22.717  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:22.717  		--rc genhtml_branch_coverage=1
00:07:22.717  		--rc genhtml_function_coverage=1
00:07:22.717  		--rc genhtml_legend=1
00:07:22.717  		--rc geninfo_all_blocks=1
00:07:22.717  		--rc geninfo_unexecuted_blocks=1
00:07:22.717  		
00:07:22.717  		'
00:07:22.717    10:04:17 vfio_user_qemu -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:07:22.717  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:22.718  		--rc genhtml_branch_coverage=1
00:07:22.718  		--rc genhtml_function_coverage=1
00:07:22.718  		--rc genhtml_legend=1
00:07:22.718  		--rc geninfo_all_blocks=1
00:07:22.718  		--rc geninfo_unexecuted_blocks=1
00:07:22.718  		
00:07:22.718  		'
00:07:22.718   10:04:17 vfio_user_qemu -- vfio_user/vfio_user.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:07:22.718    10:04:17 vfio_user_qemu -- vfio_user/common.sh@6 -- # : 128
00:07:22.718    10:04:17 vfio_user_qemu -- vfio_user/common.sh@7 -- # : 512
00:07:22.718    10:04:17 vfio_user_qemu -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:07:22.718     10:04:17 vfio_user_qemu -- vhost/common.sh@6 -- # : false
00:07:22.718     10:04:17 vfio_user_qemu -- vhost/common.sh@7 -- # : /root/vhost_test
00:07:22.718     10:04:17 vfio_user_qemu -- vhost/common.sh@8 -- # : /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:07:22.718     10:04:17 vfio_user_qemu -- vhost/common.sh@9 -- # : qemu-img
00:07:22.718      10:04:17 vfio_user_qemu -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:07:22.718     10:04:17 vfio_user_qemu -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:07:22.718     10:04:17 vfio_user_qemu -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:07:22.718     10:04:17 vfio_user_qemu -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:07:22.718     10:04:17 vfio_user_qemu -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:07:22.718     10:04:17 vfio_user_qemu -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:07:22.718     10:04:17 vfio_user_qemu -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:07:22.718       10:04:17 vfio_user_qemu -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/vfio_user.sh
00:07:22.718      10:04:17 vfio_user_qemu -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user
00:07:22.718     10:04:17 vfio_user_qemu -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user
00:07:22.718     10:04:17 vfio_user_qemu -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:07:22.718     10:04:17 vfio_user_qemu -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:07:22.718     10:04:17 vfio_user_qemu -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:07:22.718     10:04:17 vfio_user_qemu -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:07:22.718     10:04:17 vfio_user_qemu -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:07:22.718      10:04:17 vfio_user_qemu -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:07:22.718      10:04:17 vfio_user_qemu -- common/autotest.config@2 -- # vhost_0_main_core=0
00:07:22.718      10:04:17 vfio_user_qemu -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:07:22.718      10:04:17 vfio_user_qemu -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:07:22.718      10:04:17 vfio_user_qemu -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:07:22.718      10:04:17 vfio_user_qemu -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:07:22.718      10:04:17 vfio_user_qemu -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:07:22.718      10:04:17 vfio_user_qemu -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:07:22.718      10:04:17 vfio_user_qemu -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:07:22.718      10:04:17 vfio_user_qemu -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:07:22.718      10:04:17 vfio_user_qemu -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:07:22.718      10:04:17 vfio_user_qemu -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:07:22.718      10:04:17 vfio_user_qemu -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:07:22.718      10:04:17 vfio_user_qemu -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:07:22.718      10:04:17 vfio_user_qemu -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:07:22.718      10:04:17 vfio_user_qemu -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:07:22.718      10:04:17 vfio_user_qemu -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:07:22.718      10:04:17 vfio_user_qemu -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:07:22.718      10:04:17 vfio_user_qemu -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:07:22.718      10:04:17 vfio_user_qemu -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:07:22.718      10:04:17 vfio_user_qemu -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:07:22.718      10:04:17 vfio_user_qemu -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:07:22.718      10:04:17 vfio_user_qemu -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:07:22.718      10:04:17 vfio_user_qemu -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:07:22.718      10:04:17 vfio_user_qemu -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:07:22.718      10:04:17 vfio_user_qemu -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:07:22.718     10:04:17 vfio_user_qemu -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:07:22.718      10:04:17 vfio_user_qemu -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:07:22.718      10:04:17 vfio_user_qemu -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:07:22.718      10:04:17 vfio_user_qemu -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:07:22.718      10:04:17 vfio_user_qemu -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:07:22.718      10:04:17 vfio_user_qemu -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:07:22.718      10:04:17 vfio_user_qemu -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:07:22.718       10:04:17 vfio_user_qemu -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:07:22.718        10:04:17 vfio_user_qemu -- scheduler/cgroups.sh@244 -- # check_cgroup
00:07:22.718        10:04:17 vfio_user_qemu -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:07:22.718        10:04:17 vfio_user_qemu -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:07:22.718        10:04:17 vfio_user_qemu -- scheduler/cgroups.sh@10 -- # echo 2
00:07:22.718       10:04:17 vfio_user_qemu -- scheduler/cgroups.sh@244 -- # cgroup_version=2
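The trace above shows `scheduler/cgroups.sh` detecting cgroup v2 by testing for the unified-hierarchy `cgroup.controllers` file and checking that the `cpuset` controller is listed. A minimal sketch of that probe, with the sysfs root parameterized (an assumption for testability; the real script uses the fixed `/sys/fs/cgroup`):

```shell
# Sketch of the check_cgroup probe traced above. Echoes 2 for cgroup v2
# (unified hierarchy with the cpuset controller available), 1 for a v1
# cpuset hierarchy, and fails otherwise. The first argument overrides the
# sysfs root so the probe can be pointed at a test directory.
check_cgroup() {
	local sysfs_cgroup=${1:-/sys/fs/cgroup}
	if [[ -e $sysfs_cgroup/cgroup.controllers ]]; then
		# v2: every enabled controller is listed in one file
		[[ $(<"$sysfs_cgroup/cgroup.controllers") == *cpuset* ]] && echo 2 && return 0
	elif [[ -e $sysfs_cgroup/cpuset/tasks ]]; then
		# v1: each controller mounts its own hierarchy
		echo 1 && return 0
	fi
	return 1
}
```

On this host the file lists `cpuset cpu io memory hugetlb pids rdma misc`, so the probe reports version 2, matching `cgroup_version=2` in the log.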
00:07:22.718    10:04:17 vfio_user_qemu -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:22.718    10:04:17 vfio_user_qemu -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:07:22.718    10:04:17 vfio_user_qemu -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:22.718   10:04:17 vfio_user_qemu -- vfio_user/vfio_user.sh@11 -- # echo 'Running SPDK vfio-user fio autotest...'
00:07:22.718  Running SPDK vfio-user fio autotest...
00:07:22.718   10:04:17 vfio_user_qemu -- vfio_user/vfio_user.sh@13 -- # vhosttestinit
00:07:22.718   10:04:17 vfio_user_qemu -- vhost/common.sh@37 -- # '[' iso == iso ']'
00:07:22.718   10:04:17 vfio_user_qemu -- vhost/common.sh@38 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh
00:07:24.122  0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:07:24.122  0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:07:24.122  0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:07:24.122  0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:07:24.122  0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:07:24.122  0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:07:24.122  0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:07:24.122  0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:07:24.122  0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:07:24.122  0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:07:24.122  0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:07:24.122  0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:07:24.122  0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:07:24.122  0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:07:24.122  0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:07:24.122  0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:07:24.122  0000:85:00.0 (8086 0a54): Already using the vfio-pci driver
00:07:24.408   10:04:19 vfio_user_qemu -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:07:24.408   10:04:19 vfio_user_qemu -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:07:24.408   10:04:19 vfio_user_qemu -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:07:24.408   10:04:19 vfio_user_qemu -- vfio_user/vfio_user.sh@15 -- # run_test vfio_user_nvme_fio /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_fio.sh
00:07:24.408   10:04:19 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:24.408   10:04:19 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:24.408   10:04:19 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:07:24.408  ************************************
00:07:24.408  START TEST vfio_user_nvme_fio
00:07:24.408  ************************************
00:07:24.408   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_fio.sh
00:07:24.408  * Looking for test storage...
00:07:24.408  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:24.408     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1693 -- # lcov --version
00:07:24.408     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@336 -- # IFS=.-:
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@336 -- # read -ra ver1
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@337 -- # IFS=.-:
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@337 -- # read -ra ver2
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@338 -- # local 'op=<'
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@340 -- # ver1_l=2
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@341 -- # ver2_l=1
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@344 -- # case "$op" in
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@345 -- # : 1
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:24.408     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@365 -- # decimal 1
00:07:24.408     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@353 -- # local d=1
00:07:24.408     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:24.408     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@355 -- # echo 1
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@365 -- # ver1[v]=1
00:07:24.408     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@366 -- # decimal 2
00:07:24.408     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@353 -- # local d=2
00:07:24.408     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:24.408     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@355 -- # echo 2
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@366 -- # ver2[v]=2
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@368 -- # return 0
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:24.408  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:24.408  		--rc genhtml_branch_coverage=1
00:07:24.408  		--rc genhtml_function_coverage=1
00:07:24.408  		--rc genhtml_legend=1
00:07:24.408  		--rc geninfo_all_blocks=1
00:07:24.408  		--rc geninfo_unexecuted_blocks=1
00:07:24.408  		
00:07:24.408  		'
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:24.408  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:24.408  		--rc genhtml_branch_coverage=1
00:07:24.408  		--rc genhtml_function_coverage=1
00:07:24.408  		--rc genhtml_legend=1
00:07:24.408  		--rc geninfo_all_blocks=1
00:07:24.408  		--rc geninfo_unexecuted_blocks=1
00:07:24.408  		
00:07:24.408  		'
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:24.408  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:24.408  		--rc genhtml_branch_coverage=1
00:07:24.408  		--rc genhtml_function_coverage=1
00:07:24.408  		--rc genhtml_legend=1
00:07:24.408  		--rc geninfo_all_blocks=1
00:07:24.408  		--rc geninfo_unexecuted_blocks=1
00:07:24.408  		
00:07:24.408  		'
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:07:24.408  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:24.408  		--rc genhtml_branch_coverage=1
00:07:24.408  		--rc genhtml_function_coverage=1
00:07:24.408  		--rc genhtml_legend=1
00:07:24.408  		--rc geninfo_all_blocks=1
00:07:24.408  		--rc geninfo_unexecuted_blocks=1
00:07:24.408  		
00:07:24.408  		'
00:07:24.408   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@6 -- # : 128
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@7 -- # : 512
00:07:24.408    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:07:24.408     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@6 -- # : false
00:07:24.408     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@7 -- # : /root/vhost_test
00:07:24.408     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:24.408     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@9 -- # : qemu-img
00:07:24.408      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:07:24.408     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:07:24.408     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:07:24.408     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:07:24.408     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:07:24.408     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:07:24.408     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:07:24.408       10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_fio.sh
00:07:24.408      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:07:24.408     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:07:24.408     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:24.408     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:07:24.408     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:07:24.408     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:07:24.408     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:07:24.408      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:07:24.408      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@2 -- # vhost_0_main_core=0
00:07:24.408      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:07:24.408      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:07:24.408      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:07:24.408      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:07:24.408      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:07:24.408      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:07:24.408      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:07:24.408      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:07:24.408      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:07:24.408      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:07:24.408      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:07:24.408      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:07:24.408      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:07:24.408      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:07:24.408      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:07:24.409      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:07:24.409      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:07:24.409      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:07:24.409      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:07:24.409      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:07:24.409      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:07:24.409      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:07:24.409      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:07:24.409      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:07:24.409     10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:07:24.409      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:07:24.409      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:07:24.409      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:07:24.409      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:07:24.409      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:07:24.409      10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:07:24.409       10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:07:24.409        10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@244 -- # check_cgroup
00:07:24.409        10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:07:24.409        10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:07:24.409        10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@10 -- # echo 2
00:07:24.409       10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:07:24.409    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:24.409    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:07:24.409    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/common.sh
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config
00:07:24.409    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]'
00:07:24.409    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0
00:07:24.409    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5
00:07:24.409    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:07:24.409    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7
00:07:24.409    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:07:24.409    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9
00:07:24.409    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:07:24.409    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@13 -- # get_vhost_dir 0
00:07:24.409    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:07:24.409    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:07:24.409    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@13 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@15 -- # fio_bin=--fio-bin=/usr/src/fio-static/fio
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@16 -- # vm_no=2
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@18 -- # trap clean_vfio_user EXIT
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@19 -- # vhosttestinit
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@21 -- # timing_enter start_vfio_user
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@22 -- # vfio_user_run 0
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@11 -- # local vhost_name=0
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@12 -- # local vfio_user_dir nvmf_pid_file rpc_py
00:07:24.409    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@14 -- # get_vhost_dir 0
00:07:24.409    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:07:24.409    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:07:24.409    10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@14 -- # vfio_user_dir=/root/vhost_test/vhost/0
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@15 -- # nvmf_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@16 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@18 -- # mkdir -p /root/vhost_test/vhost/0
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@20 -- # timing_enter vfio_user_start
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@22 -- # nvmfpid=1753458
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/nvmf_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@23 -- # echo 1753458
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@25 -- # echo 'Process pid: 1753458'
00:07:24.409  Process pid: 1753458
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@26 -- # echo 'waiting for app to run...'
00:07:24.409  waiting for app to run...
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@27 -- # waitforlisten 1753458 /root/vhost_test/vhost/0/rpc.sock
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@835 -- # '[' -z 1753458 ']'
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:07:24.409  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:24.409   10:04:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:24.668  [2024-11-20 10:04:19.557151] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:07:24.668  [2024-11-20 10:04:19.557321] [ DPDK EAL parameters: nvmf --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753458 ]
00:07:24.668  EAL: No free 2048 kB hugepages reported on node 1
00:07:24.927  [2024-11-20 10:04:19.944823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:25.185  [2024-11-20 10:04:20.068771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:25.185  [2024-11-20 10:04:20.068830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:25.185  [2024-11-20 10:04:20.068869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:25.185  [2024-11-20 10:04:20.068880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:25.443   10:04:20 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:25.443   10:04:20 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@868 -- # return 0
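Between printing "waiting for app to run..." and the `return 0` above, `waitforlisten` polls until the freshly launched `nvmf_tgt` is listening on its UNIX-domain RPC socket (the real helper in `autotest_common.sh` also issues an RPC ping and honors `max_retries=100`). A simplified, hypothetical version of just the socket-polling part:

```shell
# Hedged sketch of the waitforlisten pattern: poll until the app's RPC
# socket path appears, up to a retry budget, sleeping briefly between
# attempts. The real helper additionally verifies the process is alive
# and that the socket answers an RPC call.
wait_for_rpc_sock() {
	local sock=$1 retries=${2:-100}
	while (( retries-- )); do
		# -S matches a real socket; -e keeps the sketch testable with files
		[[ -S $sock || -e $sock ]] && return 0
		sleep 0.1
	done
	return 1
}
```

In the log the target comes up within one retry window: the DPDK EAL init lines and the four "Reactor started on core N" notices (cores 0-3, matching `-m 0xf`) appear before the helper returns 0.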
00:07:25.443   10:04:20 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@29 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_transport -t VFIOUSER
00:07:25.701   10:04:20 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@30 -- # timing_exit vfio_user_start
00:07:25.701   10:04:20 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:25.701   10:04:20 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:25.701    10:04:20 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@27 -- # seq 0 2
00:07:25.701   10:04:20 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@27 -- # for i in $(seq 0 $vm_no)
00:07:25.701   10:04:20 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@28 -- # vm_muser_dir=/root/vhost_test/vms/0/muser
00:07:25.701   10:04:20 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@29 -- # rm -rf /root/vhost_test/vms/0/muser
00:07:25.701   10:04:20 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@30 -- # mkdir -p /root/vhost_test/vms/0/muser/domain/muser0/0
00:07:25.701   10:04:20 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@32 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_subsystem nqn.2019-07.io.spdk:cnode0 -s SPDK000 -a
00:07:25.959   10:04:21 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@33 -- # (( i == vm_no ))
00:07:25.959   10:04:21 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_create 128 512 -b Malloc0
00:07:26.525  Malloc0
00:07:26.525   10:04:21 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@38 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode0 Malloc0
00:07:26.783   10:04:21 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@40 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode0 -t VFIOUSER -a /root/vhost_test/vms/0/muser/domain/muser0/0 -s 0
00:07:27.041   10:04:21 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@27 -- # for i in $(seq 0 $vm_no)
00:07:27.041   10:04:21 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@28 -- # vm_muser_dir=/root/vhost_test/vms/1/muser
00:07:27.041   10:04:21 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@29 -- # rm -rf /root/vhost_test/vms/1/muser
00:07:27.041   10:04:21 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@30 -- # mkdir -p /root/vhost_test/vms/1/muser/domain/muser1/1
00:07:27.041   10:04:21 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@32 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -s SPDK001 -a
00:07:27.299   10:04:22 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@33 -- # (( i == vm_no ))
00:07:27.300   10:04:22 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_create 128 512 -b Malloc1
00:07:27.558  Malloc1
00:07:27.558   10:04:22 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@38 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:07:27.816   10:04:22 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@40 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /root/vhost_test/vms/1/muser/domain/muser1/1 -s 0
00:07:28.074   10:04:23 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@27 -- # for i in $(seq 0 $vm_no)
00:07:28.074   10:04:23 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@28 -- # vm_muser_dir=/root/vhost_test/vms/2/muser
00:07:28.074   10:04:23 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@29 -- # rm -rf /root/vhost_test/vms/2/muser
00:07:28.332   10:04:23 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@30 -- # mkdir -p /root/vhost_test/vms/2/muser/domain/muser2/2
00:07:28.332   10:04:23 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@32 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -s SPDK002 -a
00:07:28.590   10:04:23 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@33 -- # (( i == vm_no ))
00:07:28.590   10:04:23 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:07:28.590   10:04:23 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock load_subsystem_config
00:07:31.872   10:04:26 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@35 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Nvme0n1
00:07:31.872   10:04:26 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@40 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /root/vhost_test/vms/2/muser/domain/muser2/2 -s 0
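The loop traced above (`for i in $(seq 0 $vm_no)` in `vfio_user_fio.sh`) provisions one vfio-user endpoint per VM: it recreates the muser socket directory, creates subsystem `nqn.2019-07.io.spdk:cnode$i` with serial `SPDK00$i`, attaches a namespace (a `Malloc` bdev for VMs 0 and 1, the physical `Nvme0n1` for VM 2), and adds a VFIOUSER listener on the muser path. A dry-run sketch that prints the command sequence instead of executing it, so no running `nvmf_tgt` is needed (`provision_vfio_user_vm` and the relative `scripts/rpc.py` path are illustrative assumptions):

```shell
# Hypothetical dry-run condensation of the per-VM provisioning loop above.
# Prints, rather than runs, the RPC calls issued for VM $1 backed by bdev $2.
provision_vfio_user_vm() {
	local i=$1 bdev=$2
	local rpc="scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock"
	local muser=/root/vhost_test/vms/$i/muser/domain/muser$i/$i
	echo "mkdir -p $muser"
	echo "$rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -s SPDK00$i -a"
	echo "$rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i $bdev"
	echo "$rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a $muser -s 0"
}
```

Running `provision_vfio_user_vm 1 Malloc1` reproduces the cnode1 command sequence visible in the trace; the earlier `nvmf_create_transport -t VFIOUSER` call is a one-time prerequisite outside this loop.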
00:07:32.130   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@43 -- # timing_exit start_vfio_user
00:07:32.130   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:32.130   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:32.131   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@45 -- # used_vms=
00:07:32.131   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@46 -- # timing_enter launch_vms
00:07:32.131   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:32.131   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:32.131    10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@47 -- # seq 0 2
00:07:32.131   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@47 -- # for i in $(seq 0 $vm_no)
00:07:32.131   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@48 -- # vm_setup --disk-type=vfio_user --force=0 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --memory=768 --disks=0
00:07:32.131   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@518 -- # xtrace_disable
00:07:32.131   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:32.131  WARN: removing existing VM in '/root/vhost_test/vms/0'
00:07:32.131  INFO: Creating new VM in /root/vhost_test/vms/0
00:07:32.131  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:07:32.131  INFO: TASK MASK: 4-5
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@671 -- # local node_num=0
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@672 -- # local boot_disk_present=false
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:07:32.391  INFO: NUMA NODE: 0
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@677 -- # [[ -n '' ]]
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@686 -- # [[ -z '' ]]
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # IFS=,
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # read -r disk disk_type _
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # [[ -z '' ]]
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # disk_type=vfio_user
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@704 -- # case $disk_type in
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@758 -- # notice 'using socket /root/vhost_test/vms/0/domain/muser0/0/cntrl'
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/0/domain/muser0/0/cntrl'
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/0/domain/muser0/0/cntrl'
00:07:32.391  INFO: using socket /root/vhost_test/vms/0/domain/muser0/0/cntrl
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@759 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl")
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@760 -- # [[ 0 == '' ]]
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@780 -- # [[ -n '' ]]
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@785 -- # (( 0 ))
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/0/run.sh'
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/0/run.sh'
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/0/run.sh'
00:07:32.391  INFO: Saving to /root/vhost_test/vms/0/run.sh
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # cat
00:07:32.391    10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 4-5 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 768 --enable-kvm -cpu host -smp 2 -vga std -vnc :100 -daemonize -object memory-backend-file,id=mem,size=768M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10002,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/0/qemu.pid -serial file:/root/vhost_test/vms/0/serial.log -D /root/vhost_test/vms/0/qemu.log -chardev file,path=/root/vhost_test/vms/0/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10000-:22,hostfwd=tcp::10001-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/0/muser/domain/muser0/0/cntrl
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/0/run.sh
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@827 -- # echo 10000
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@828 -- # echo 10001
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@829 -- # echo 10002
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/0/migration_port
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@832 -- # [[ -z '' ]]
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@834 -- # echo 10004
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@835 -- # echo 100
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@837 -- # [[ -z '' ]]
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@838 -- # [[ -z '' ]]
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@49 -- # used_vms+=' 0'
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@47 -- # for i in $(seq 0 $vm_no)
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@48 -- # vm_setup --disk-type=vfio_user --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --memory=768 --disks=1
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@518 -- # xtrace_disable
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:32.391  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:07:32.391  INFO: Creating new VM in /root/vhost_test/vms/1
00:07:32.391  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:07:32.391  INFO: TASK MASK: 6-7
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@671 -- # local node_num=0
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@672 -- # local boot_disk_present=false
00:07:32.391   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:07:32.392  INFO: NUMA NODE: 0
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@677 -- # [[ -n '' ]]
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@686 -- # [[ -z '' ]]
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # IFS=,
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # read -r disk disk_type _
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # [[ -z '' ]]
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # disk_type=vfio_user
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@704 -- # case $disk_type in
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@758 -- # notice 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:07:32.392  INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@759 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl")
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@760 -- # [[ 1 == '' ]]
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@780 -- # [[ -n '' ]]
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@785 -- # (( 0 ))
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:07:32.392  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # cat
00:07:32.392    10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 768 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=768M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/1/muser/domain/muser1/1/cntrl
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@827 -- # echo 10100
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@828 -- # echo 10101
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@829 -- # echo 10102
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@832 -- # [[ -z '' ]]
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@834 -- # echo 10104
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@835 -- # echo 101
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@837 -- # [[ -z '' ]]
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@838 -- # [[ -z '' ]]
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@49 -- # used_vms+=' 1'
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@47 -- # for i in $(seq 0 $vm_no)
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@48 -- # vm_setup --disk-type=vfio_user --force=2 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --memory=768 --disks=2
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@518 -- # xtrace_disable
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:32.392  WARN: removing existing VM in '/root/vhost_test/vms/2'
00:07:32.392  INFO: Creating new VM in /root/vhost_test/vms/2
00:07:32.392  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:07:32.392  INFO: TASK MASK: 8-9
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@671 -- # local node_num=0
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@672 -- # local boot_disk_present=false
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:07:32.392  INFO: NUMA NODE: 0
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@677 -- # [[ -n '' ]]
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@686 -- # [[ -z '' ]]
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # IFS=,
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # read -r disk disk_type _
00:07:32.392   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # [[ -z '' ]]
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # disk_type=vfio_user
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@704 -- # case $disk_type in
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@758 -- # notice 'using socket /root/vhost_test/vms/2/domain/muser2/2/cntrl'
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/2/domain/muser2/2/cntrl'
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/2/domain/muser2/2/cntrl'
00:07:32.393  INFO: using socket /root/vhost_test/vms/2/domain/muser2/2/cntrl
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@759 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl")
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@760 -- # [[ 2 == '' ]]
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@780 -- # [[ -n '' ]]
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@785 -- # (( 0 ))
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/2/run.sh'
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/2/run.sh'
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/2/run.sh'
00:07:32.393  INFO: Saving to /root/vhost_test/vms/2/run.sh
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # cat
00:07:32.393    10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 8-9 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 768 --enable-kvm -cpu host -smp 2 -vga std -vnc :102 -daemonize -object memory-backend-file,id=mem,size=768M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10202,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/2/qemu.pid -serial file:/root/vhost_test/vms/2/serial.log -D /root/vhost_test/vms/2/qemu.log -chardev file,path=/root/vhost_test/vms/2/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10200-:22,hostfwd=tcp::10201-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/2/muser/domain/muser2/2/cntrl
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/2/run.sh
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@827 -- # echo 10200
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@828 -- # echo 10201
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@829 -- # echo 10202
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/2/migration_port
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@832 -- # [[ -z '' ]]
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@834 -- # echo 10204
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@835 -- # echo 102
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@837 -- # [[ -z '' ]]
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@838 -- # [[ -z '' ]]
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@49 -- # used_vms+=' 2'
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@52 -- # vm_run 0 1 2
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@843 -- # local run_all=false
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@844 -- # local vms_to_run=
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@846 -- # getopts a-: optchar
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@856 -- # false
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@859 -- # shift 0
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@860 -- # for vm in "$@"
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/0/run.sh ]]
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@866 -- # vms_to_run+=' 0'
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@860 -- # for vm in "$@"
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@860 -- # for vm in "$@"
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/2/run.sh ]]
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@866 -- # vms_to_run+=' 2'
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@871 -- # vm_is_running 0
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/0/run.sh'
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/0/run.sh'
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/0/run.sh'
00:07:32.393  INFO: running /root/vhost_test/vms/0/run.sh
00:07:32.393   10:04:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@877 -- # /root/vhost_test/vms/0/run.sh
00:07:32.393  Running VM in /root/vhost_test/vms/0
00:07:32.962  Waiting for QEMU pid file
00:07:32.962  [2024-11-20 10:04:27.970390] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: enabling controller
00:07:33.901  === qemu.log ===
00:07:33.901  === qemu.log ===
00:07:33.901   10:04:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:07:33.901   10:04:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@871 -- # vm_is_running 1
00:07:33.901   10:04:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:07:33.901   10:04:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:33.901   10:04:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:07:33.901   10:04:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:07:33.901   10:04:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:07:33.901   10:04:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:07:33.901   10:04:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:07:33.901   10:04:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:07:33.901   10:04:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:33.901   10:04:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:33.901   10:04:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:33.901   10:04:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:33.901   10:04:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:33.901   10:04:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:07:33.901  INFO: running /root/vhost_test/vms/1/run.sh
00:07:33.901   10:04:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:07:33.901  Running VM in /root/vhost_test/vms/1
00:07:34.159  Waiting for QEMU pid file
00:07:34.418  [2024-11-20 10:04:29.366436] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:07:35.357  === qemu.log ===
00:07:35.357  === qemu.log ===
00:07:35.357   10:04:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:07:35.357   10:04:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@871 -- # vm_is_running 2
00:07:35.357   10:04:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 2
00:07:35.357   10:04:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:35.357   10:04:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:07:35.357   10:04:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/2
00:07:35.357   10:04:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:07:35.357   10:04:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:07:35.357   10:04:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/2/run.sh'
00:07:35.357   10:04:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/2/run.sh'
00:07:35.357   10:04:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:35.357   10:04:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:35.357   10:04:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:35.357   10:04:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:35.357   10:04:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:35.357   10:04:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/2/run.sh'
00:07:35.357  INFO: running /root/vhost_test/vms/2/run.sh
00:07:35.357   10:04:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@877 -- # /root/vhost_test/vms/2/run.sh
00:07:35.357  Running VM in /root/vhost_test/vms/2
00:07:35.357  Waiting for QEMU pid file
00:07:35.615  [2024-11-20 10:04:30.660268] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: enabling controller
00:07:36.549  === qemu.log ===
00:07:36.549  === qemu.log ===
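The launch loop traced above (vhost/common.sh lines 870-877) starts each VM's run.sh only if the VM is not already up, judging "up" by the presence of a readable qemu.pid. A minimal sketch of that logic, reconstructed from the xtrace rather than the verbatim vhost/common.sh source (the pid-liveness check via `kill -0` is an assumption):

```shell
# Reconstructed sketch, not the verbatim vhost/common.sh. A VM counts as
# running only if its qemu.pid file is readable and the recorded PID is
# still alive; otherwise its run.sh would be (re)launched.
VM_BASE=${VM_BASE:-/root/vhost_test/vms}   # layout assumed from the log

vm_is_running() {
    local vm_dir="$VM_BASE/$1"
    [[ -r "$vm_dir/qemu.pid" ]] || return 1           # no pid file yet
    kill -0 "$(cat "$vm_dir/qemu.pid")" 2>/dev/null   # is the PID alive?
}

for vm in 0 1 2; do
    if vm_is_running "$vm"; then
        continue                                      # already up, skip
    fi
    echo "INFO: running $VM_BASE/$vm/run.sh"
    # "$VM_BASE/$vm/run.sh"   # actual QEMU launch, commented out here
done
```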
00:07:36.549   10:04:31 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@53 -- # vm_wait_for_boot 60 0 1 2
00:07:36.549   10:04:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@913 -- # assert_number 60
00:07:36.549   10:04:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:07:36.549   10:04:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@281 -- # return 0
00:07:36.549   10:04:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@915 -- # xtrace_disable
00:07:36.549   10:04:31 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:36.549  INFO: Waiting for VMs to boot
00:07:36.549  INFO: waiting for VM0 (/root/vhost_test/vms/0)
00:07:46.520  [2024-11-20 10:04:40.477216] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: disabling controller
00:07:46.520  [2024-11-20 10:04:40.486244] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: disabling controller
00:07:46.520  [2024-11-20 10:04:40.490278] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: enabling controller
00:07:46.778  [2024-11-20 10:04:41.764095] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:07:46.778  [2024-11-20 10:04:41.773097] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:07:46.778  [2024-11-20 10:04:41.777127] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:07:48.153  [2024-11-20 10:04:42.996616] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: disabling controller
00:07:48.153  [2024-11-20 10:04:43.005625] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: disabling controller
00:07:48.153  [2024-11-20 10:04:43.009668] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: enabling controller
00:07:58.124  
00:07:58.124  INFO: VM0 ready
00:07:58.124  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:07:58.124  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:07:58.690  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:07:58.948  
00:07:58.948  INFO: VM1 ready
00:07:59.207  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:07:59.207  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:00.141  INFO: waiting for VM2 (/root/vhost_test/vms/2)
00:08:00.705  
00:08:00.705  INFO: VM2 ready
00:08:00.705  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:00.705  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:01.643  INFO: all VMs ready
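The `vm_wait_for_boot 60 0 1 2` call above polls each VM until it answers (the per-VM "waiting for"/"ready" INFO lines) before declaring "all VMs ready". A hedged sketch of that polling shape; the real vhost/common.sh implementation is not shown in the trace, so the reachability check is abstracted into a `probe` parameter rather than hard-coding the SSH command:

```shell
# Hedged sketch of the boot wait; the verbatim vm_wait_for_boot may
# differ. $1 is the timeout budget in seconds, $2 a probe command that
# succeeds once the given VM is reachable, remaining args are VM numbers.
vm_wait_for_boot() {
    local timeout=$1 probe=$2
    shift 2
    local vm waited
    for vm in "$@"; do
        echo "INFO: waiting for VM$vm"
        waited=0
        until "$probe" "$vm"; do
            sleep 1
            if (( ++waited >= timeout )); then
                return 1        # VM never came up within the budget
            fi
        done
        echo "INFO: VM$vm ready"
    done
    echo "INFO: all VMs ready"
}
```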
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@973 -- # return 0
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@55 -- # timing_exit launch_vms
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@57 -- # timing_enter run_vm_cmd
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@59 -- # fio_disks=
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@61 -- # for vm_num in $used_vms
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@62 -- # qemu_mask_param=VM_0_qemu_mask
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@64 -- # host_name=VM-0-4-5
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@65 -- # vm_exec 0 'hostname VM-0-4-5'
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:01.643    10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:08:01.643    10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:08:01.643    10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:01.643    10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:01.643    10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:08:01.643    10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'hostname VM-0-4-5'
00:08:01.643  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
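Every guest command in this log goes through the `vm_exec` helper, which resolves the VM's forwarded SSH port from its `ssh_socket` file and shells in via sshpass with host-key checking disabled (these are throwaway test VMs, hence the repeated "Permanently added" warnings). A sketch reconstructed from the trace; the verbatim vhost/common.sh helper may handle more cases:

```shell
# Reconstructed sketch of vm_exec as it appears in the xtrace. The VM's
# forwarded SSH port lives in <vm_dir>/ssh_socket (10000/10100/10200 in
# this run); the command runs as root on 127.0.0.1 at that port.
VM_DIR_BASE=${VM_DIR_BASE:-/root/vhost_test/vms}

vm_exec() {
    local vm_num=$1
    shift
    local port
    port=$(cat "$VM_DIR_BASE/$vm_num/ssh_socket")
    sshpass -p root ssh -o UserKnownHostsFile=/dev/null \
        -o StrictHostKeyChecking=no -o User=root \
        -p "$port" 127.0.0.1 "$@"
}
```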
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@66 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 0
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@977 -- # local OPTIND optchar
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@978 -- # local readonly=
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@979 -- # local fio_bin=
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@981 -- # case "$optchar" in
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@983 -- # case "$OPTARG" in
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:08:01.643   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@993 -- # shift 1
00:08:01.644   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@994 -- # for vm_num in "$@"
00:08:01.644   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@995 -- # notice 'Starting fio server on VM0'
00:08:01.644   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM0'
00:08:01.644   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:01.644   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:01.644   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:01.644   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:01.644   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:01.644   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM0'
00:08:01.644  INFO: Starting fio server on VM0
00:08:01.644   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:08:01.644   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@997 -- # vm_exec 0 'cat > /root/fio; chmod +x /root/fio'
00:08:01.644   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:08:01.644   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:01.644   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:01.644   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:08:01.644   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:01.644    10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:08:01.644    10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:08:01.644    10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:01.644    10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:01.644    10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:08:01.644    10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:08:01.644   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:08:01.644  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:08:01.902   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@998 -- # vm_exec 0 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:08:01.902   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:08:01.902   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:01.902   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:01.902   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:08:01.902   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:01.902    10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:08:01.902    10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:08:01.902    10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:01.902    10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:01.902    10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:08:01.902    10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:08:01.902   10:04:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:08:01.902  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:08:02.161   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@67 -- # vm_check_nvme_location 0
00:08:02.161    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # vm_exec 0 'grep -l SPDK /sys/class/nvme/*/model'
00:08:02.161    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # awk -F/ '{print $5"n1"}'
00:08:02.161    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:08:02.161    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:02.161    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:02.161    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:08:02.161    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:02.161     10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:08:02.161     10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:08:02.161     10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:02.161     10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:02.161     10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:08:02.161     10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:08:02.161    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l SPDK /sys/class/nvme/*/model'
00:08:02.161  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:08:02.161   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # SCSI_DISK=nvme0n1
00:08:02.161   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1046 -- # [[ -z nvme0n1 ]]
00:08:02.161    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # printf :/dev/%s nvme0n1
00:08:02.161   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # fio_disks+=' --vm=0:/dev/nvme0n1'
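The disk-discovery step above (`vm_check_nvme_location`) runs `grep -l SPDK /sys/class/nvme/*/model` in the guest to find controllers whose model string identifies them as SPDK-backed, then derives the block-device name from the sysfs path. The awk step is shown verbatim in the trace; wrapping it in a function here is only for illustration:

```shell
# The path /sys/class/nvme/nvme0/model splits on "/" into fields
# ("", sys, class, nvme, nvme0), so $5 is the controller name; appending
# "n1" names its first namespace's block device (assumption carried from
# the test: one namespace per controller).
nvme_path_to_disk() {
    awk -F/ '{print $5 "n1"}'
}
# In the guest this is fed by: grep -l SPDK /sys/class/nvme/*/model
```

With a matching model file at `/sys/class/nvme/nvme0/model`, this yields `nvme0n1`, which the test then passes to fio as `--vm=0:/dev/nvme0n1`.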
00:08:02.161   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@61 -- # for vm_num in $used_vms
00:08:02.161   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@62 -- # qemu_mask_param=VM_1_qemu_mask
00:08:02.161   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@64 -- # host_name=VM-1-6-7
00:08:02.161   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@65 -- # vm_exec 1 'hostname VM-1-6-7'
00:08:02.161   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:02.161   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:02.161   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:02.161   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:08:02.161   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:02.161    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:02.161    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:02.161    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:02.161    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:02.161    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:02.161    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:02.161   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'hostname VM-1-6-7'
00:08:02.420  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@66 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 1
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@977 -- # local OPTIND optchar
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@978 -- # local readonly=
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@979 -- # local fio_bin=
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@981 -- # case "$optchar" in
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@983 -- # case "$OPTARG" in
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@993 -- # shift 1
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@994 -- # for vm_num in "$@"
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@995 -- # notice 'Starting fio server on VM1'
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM1'
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM1'
00:08:02.420  INFO: Starting fio server on VM1
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@997 -- # vm_exec 1 'cat > /root/fio; chmod +x /root/fio'
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:02.420    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:02.420    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:02.420    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:02.420    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:02.420    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:02.420    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:02.420   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:08:02.420  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:02.678   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@998 -- # vm_exec 1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:08:02.678   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:02.678   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:02.678   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:02.678   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:08:02.678   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:02.678    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:02.678    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:02.678    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:02.678    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:02.678    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:02.678    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:02.678   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:08:02.678  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:02.938   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@67 -- # vm_check_nvme_location 1
00:08:02.938    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # vm_exec 1 'grep -l SPDK /sys/class/nvme/*/model'
00:08:02.938    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:02.938    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # awk -F/ '{print $5"n1"}'
00:08:02.938    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:02.938    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:02.938    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:08:02.938    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:02.938     10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:02.938     10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:02.938     10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:02.938     10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:02.938     10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:02.938     10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:02.938    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'grep -l SPDK /sys/class/nvme/*/model'
00:08:02.938  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:02.938   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # SCSI_DISK=nvme0n1
00:08:02.938   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1046 -- # [[ -z nvme0n1 ]]
00:08:02.938    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # printf :/dev/%s nvme0n1
00:08:02.938   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # fio_disks+=' --vm=1:/dev/nvme0n1'
00:08:02.938   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@61 -- # for vm_num in $used_vms
00:08:02.938   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@62 -- # qemu_mask_param=VM_2_qemu_mask
00:08:02.938   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@64 -- # host_name=VM-2-8-9
00:08:02.938   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@65 -- # vm_exec 2 'hostname VM-2-8-9'
00:08:02.938   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:08:02.938   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:02.938   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:02.938   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:08:02.938   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:02.938    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:08:02.938    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:08:02.938    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:02.938    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:02.938    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:08:02.938    10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:08:02.938   10:04:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'hostname VM-2-8-9'
00:08:02.938  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@66 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 2
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@977 -- # local OPTIND optchar
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@978 -- # local readonly=
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@979 -- # local fio_bin=
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@981 -- # case "$optchar" in
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@983 -- # case "$OPTARG" in
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@993 -- # shift 1
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@994 -- # for vm_num in "$@"
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@995 -- # notice 'Starting fio server on VM2'
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM2'
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM2'
00:08:03.197  INFO: Starting fio server on VM2
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@997 -- # vm_exec 2 'cat > /root/fio; chmod +x /root/fio'
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:03.197    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:08:03.197    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:08:03.197    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:03.197    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:03.197    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:08:03.197    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:08:03.197   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:08:03.197  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:03.455   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@998 -- # vm_exec 2 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:08:03.455   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:08:03.455   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:03.455   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:03.455   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:08:03.455   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:03.455    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:08:03.456    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:08:03.456    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:03.456    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:03.456    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:08:03.456    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:08:03.456   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:08:03.456  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:03.456   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@67 -- # vm_check_nvme_location 2
00:08:03.456    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # vm_exec 2 'grep -l SPDK /sys/class/nvme/*/model'
00:08:03.456    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # awk -F/ '{print $5"n1"}'
00:08:03.456    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:08:03.456    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:03.456    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:03.456    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:08:03.456    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:03.456     10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:08:03.456     10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:08:03.456     10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:03.456     10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:03.456     10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:08:03.456     10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:08:03.456    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'grep -l SPDK /sys/class/nvme/*/model'
00:08:03.715  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # SCSI_DISK=nvme0n1
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1046 -- # [[ -z nvme0n1 ]]
00:08:03.715    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # printf :/dev/%s nvme0n1
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # fio_disks+=' --vm=2:/dev/nvme0n1'
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@72 -- # job_file=default_integrity.job
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@73 -- # run_fio --fio-bin=/usr/src/fio-static/fio --job-file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job --out=/root/vhost_test/fio_results --vm=0:/dev/nvme0n1 --vm=1:/dev/nvme0n1 --vm=2:/dev/nvme0n1
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1053 -- # local arg
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1054 -- # local job_file=
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1055 -- # local fio_bin=
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1056 -- # vms=()
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1056 -- # local vms
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1057 -- # local out=
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1058 -- # local vm
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1059 -- # local run_server_mode=true
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1060 -- # local run_plugin_mode=false
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1061 -- # local fio_start_cmd
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1062 -- # local fio_output_format=normal
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1063 -- # local fio_gtod_reduce=false
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1064 -- # local wait_for_fio=true
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1069 -- # local fio_bin=/usr/src/fio-static/fio
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1068 -- # local job_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1072 -- # local out=/root/vhost_test/fio_results
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1073 -- # mkdir -p /root/vhost_test/fio_results
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1092 -- # [[ -n /usr/src/fio-static/fio ]]
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1092 -- # [[ ! -r /usr/src/fio-static/fio ]]
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1097 -- # [[ -z /usr/src/fio-static/fio ]]
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1101 -- # [[ ! -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job ]]
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1106 -- # fio_start_cmd='/usr/src/fio-static/fio --eta=never '
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1108 -- # local job_fname
00:08:03.715    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1109 -- # basename /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1109 -- # job_fname=default_integrity.job
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1110 -- # log_fname=default_integrity.log
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1111 -- # fio_start_cmd+=' --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal '
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1115 -- # local vm_num=0
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1116 -- # local vmdisks=/dev/nvme0n1
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/dev/nvme0n1@;s@description=\(.*\)@description=\1 (VM=0)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1119 -- # vm_exec 0 'cat > /root/default_integrity.job'
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:08:03.715   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:03.716    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:08:03.716    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:08:03.716    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:03.716    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:03.716    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:08:03.716    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:08:03.716   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'cat > /root/default_integrity.job'
00:08:03.716  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:08:03.975   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1121 -- # false
00:08:03.975   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1125 -- # vm_exec 0 cat /root/default_integrity.job
00:08:03.975   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:08:03.975   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:03.975   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:03.975   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:08:03.975   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:03.975    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:08:03.975    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:08:03.975    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:03.975    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:03.975    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:08:03.975    10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:08:03.975   10:04:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 cat /root/default_integrity.job
00:08:03.975  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:08:03.975  [global]
00:08:03.975  blocksize_range=4k-512k
00:08:03.975  iodepth=512
00:08:03.975  iodepth_batch=128
00:08:03.975  iodepth_low=256
00:08:03.975  ioengine=libaio
00:08:03.975  size=1G
00:08:03.975  io_size=4G
00:08:03.975  filename=/dev/nvme0n1
00:08:03.975  group_reporting
00:08:03.975  thread
00:08:03.975  numjobs=1
00:08:03.975  direct=1
00:08:03.975  rw=randwrite
00:08:03.975  do_verify=1
00:08:03.975  verify=md5
00:08:03.975  verify_backlog=1024
00:08:03.975  fsync_on_close=1
00:08:03.975  verify_state_save=0
00:08:03.975  [nvme-host]
00:08:03.975   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1127 -- # true
00:08:03.975    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # vm_fio_socket 0
00:08:03.975    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@326 -- # vm_num_is_valid 0
00:08:03.975    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:03.975    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:03.975    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/0
00:08:03.975    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/0/fio_socket
00:08:03.975   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10001 --remote-config /root/default_integrity.job '
00:08:03.975   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1131 -- # true
00:08:03.976   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:08:03.976   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1115 -- # local vm_num=1
00:08:03.976   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1116 -- # local vmdisks=/dev/nvme0n1
00:08:03.976   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/dev/nvme0n1@;s@description=\(.*\)@description=\1 (VM=1)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:08:03.976   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1119 -- # vm_exec 1 'cat > /root/default_integrity.job'
00:08:03.976   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:03.976   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:03.976   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:03.976   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:08:03.976   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:03.976    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:03.976    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:03.976    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:03.976    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:03.976    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:03.976    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:03.976   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/default_integrity.job'
00:08:04.235  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:04.235   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1121 -- # false
00:08:04.235   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1125 -- # vm_exec 1 cat /root/default_integrity.job
00:08:04.235   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:04.235   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:04.235   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:04.235   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:08:04.235   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:04.235    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:04.235    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:04.235    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:04.235    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:04.235    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:04.236    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:04.236   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 cat /root/default_integrity.job
00:08:04.236  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:04.495  [global]
00:08:04.495  blocksize_range=4k-512k
00:08:04.495  iodepth=512
00:08:04.495  iodepth_batch=128
00:08:04.495  iodepth_low=256
00:08:04.495  ioengine=libaio
00:08:04.495  size=1G
00:08:04.495  io_size=4G
00:08:04.495  filename=/dev/nvme0n1
00:08:04.495  group_reporting
00:08:04.495  thread
00:08:04.495  numjobs=1
00:08:04.495  direct=1
00:08:04.495  rw=randwrite
00:08:04.495  do_verify=1
00:08:04.495  verify=md5
00:08:04.495  verify_backlog=1024
00:08:04.495  fsync_on_close=1
00:08:04.495  verify_state_save=0
00:08:04.495  [nvme-host]
00:08:04.495   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1127 -- # true
00:08:04.495    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # vm_fio_socket 1
00:08:04.495    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@326 -- # vm_num_is_valid 1
00:08:04.495    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:04.495    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:04.495    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/1
00:08:04.495    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/1/fio_socket
00:08:04.495   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10101 --remote-config /root/default_integrity.job '
00:08:04.495   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1131 -- # true
00:08:04.495   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:08:04.495   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1115 -- # local vm_num=2
00:08:04.495   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1116 -- # local vmdisks=/dev/nvme0n1
00:08:04.495   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/dev/nvme0n1@;s@description=\(.*\)@description=\1 (VM=2)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:08:04.495   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1119 -- # vm_exec 2 'cat > /root/default_integrity.job'
00:08:04.495   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:08:04.495   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:04.495   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:04.495   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:08:04.495   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:04.495    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:08:04.495    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:08:04.495    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:04.495    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:04.495    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:08:04.496    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:08:04.496   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'cat > /root/default_integrity.job'
00:08:04.496  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:04.496   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1121 -- # false
00:08:04.496   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1125 -- # vm_exec 2 cat /root/default_integrity.job
00:08:04.496   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:08:04.496   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:04.496   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:04.496   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:08:04.496   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:04.496    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:08:04.496    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:08:04.496    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:04.496    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:04.496    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:08:04.496    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:08:04.496   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 cat /root/default_integrity.job
00:08:04.496  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:04.754  [global]
00:08:04.754  blocksize_range=4k-512k
00:08:04.754  iodepth=512
00:08:04.754  iodepth_batch=128
00:08:04.755  iodepth_low=256
00:08:04.755  ioengine=libaio
00:08:04.755  size=1G
00:08:04.755  io_size=4G
00:08:04.755  filename=/dev/nvme0n1
00:08:04.755  group_reporting
00:08:04.755  thread
00:08:04.755  numjobs=1
00:08:04.755  direct=1
00:08:04.755  rw=randwrite
00:08:04.755  do_verify=1
00:08:04.755  verify=md5
00:08:04.755  verify_backlog=1024
00:08:04.755  fsync_on_close=1
00:08:04.755  verify_state_save=0
00:08:04.755  [nvme-host]
00:08:04.755   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1127 -- # true
00:08:04.755    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # vm_fio_socket 2
00:08:04.755    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@326 -- # vm_num_is_valid 2
00:08:04.755    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:04.755    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:04.755    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/2
00:08:04.755    10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/2/fio_socket
00:08:04.755   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10201 --remote-config /root/default_integrity.job '
00:08:04.755   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1131 -- # true
00:08:04.755   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1147 -- # true
00:08:04.755   10:04:59 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1161 -- # /usr/src/fio-static/fio --eta=never --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal --client=127.0.0.1,10001 --remote-config /root/default_integrity.job --client=127.0.0.1,10101 --remote-config /root/default_integrity.job --client=127.0.0.1,10201 --remote-config /root/default_integrity.job
00:08:19.663   10:05:14 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1162 -- # sleep 1
00:08:20.262   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1164 -- # [[ normal == \j\s\o\n ]]
00:08:20.262   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1172 -- # [[ ! -n '' ]]
00:08:20.262   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1173 -- # cat /root/vhost_test/fio_results/default_integrity.log
00:08:20.262  hostname=VM-2-8-9, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:08:20.262  hostname=VM-1-6-7, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:08:20.262  hostname=VM-0-4-5, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:08:20.262  <VM-2-8-9> nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:08:20.262  <VM-1-6-7> nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:08:20.262  <VM-0-4-5> nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:08:20.262  <VM-1-6-7> Starting 1 thread
00:08:20.262  <VM-2-8-9> Starting 1 thread
00:08:20.262  <VM-0-4-5> Starting 1 thread
00:08:20.262  <VM-1-6-7> 
00:08:20.262  nvme-host: (groupid=0, jobs=1): err= 0: pid=944: Wed Nov 20 10:05:11 2024
00:08:20.262    read: IOPS=1027, BW=200MiB/s (210MB/s)(2072MiB/10353msec)
00:08:20.262      slat (usec): min=24, max=32379, avg=11230.16, stdev=8105.69
00:08:20.262      clat (usec): min=394, max=53764, avg=23495.02, stdev=13729.67
00:08:20.262       lat (usec): min=3557, max=54996, avg=34725.18, stdev=12730.31
00:08:20.262      clat percentiles (usec):
00:08:20.262       |  1.00th=[  578],  5.00th=[  824], 10.00th=[11469], 20.00th=[13042],
00:08:20.262       | 30.00th=[13960], 40.00th=[15270], 50.00th=[15926], 60.00th=[28443],
00:08:20.262       | 70.00th=[31851], 80.00th=[34341], 90.00th=[46924], 95.00th=[49546],
00:08:20.262       | 99.00th=[51643], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740],
00:08:20.262       | 99.99th=[53740]
00:08:20.262    write: IOPS=2092, BW=407MiB/s (427MB/s)(2072MiB/5085msec); 0 zone resets
00:08:20.262      slat (usec): min=262, max=71601, avg=26853.45, stdev=16660.49
00:08:20.263      clat (usec): min=383, max=189865, avg=63815.93, stdev=46966.35
00:08:20.263       lat (msec): min=4, max=198, avg=90.67, stdev=52.25
00:08:20.263      clat percentiles (msec):
00:08:20.263       |  1.00th=[    4],  5.00th=[    8], 10.00th=[   11], 20.00th=[   15],
00:08:20.263       | 30.00th=[   22], 40.00th=[   41], 50.00th=[   59], 60.00th=[   68],
00:08:20.263       | 70.00th=[   95], 80.00th=[  117], 90.00th=[  129], 95.00th=[  142],
00:08:20.263       | 99.00th=[  171], 99.50th=[  174], 99.90th=[  188], 99.95th=[  190],
00:08:20.263       | 99.99th=[  190]
00:08:20.263     bw (  KiB/s): min=157144, max=314288, per=48.42%, avg=202012.38, stdev=72695.20, samples=21
00:08:20.263     iops        : min=  788, max= 1576, avg=1012.95, stdev=364.47, samples=21
00:08:20.263    lat (usec)   : 500=0.63%, 750=1.10%, 1000=1.56%
00:08:20.263    lat (msec)   : 2=0.32%, 4=1.16%, 10=3.99%, 20=32.17%, 50=29.33%
00:08:20.263    lat (msec)   : 100=16.42%, 250=13.33%
00:08:20.263    cpu          : usr=81.97%, sys=1.88%, ctx=685, majf=0, minf=17
00:08:20.263    IO depths    : 1=0.0%, 2=0.6%, 4=1.2%, 8=1.8%, 16=3.6%, 32=7.8%, >=64=84.8%
00:08:20.263       submit    : 0=0.0%, 4=1.8%, 8=1.8%, 16=3.2%, 32=6.4%, 64=11.8%, >=64=75.0%
00:08:20.263       complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:08:20.263       issued rwts: total=10638,10638,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:20.263       latency   : target=0, window=0, percentile=100.00%, depth=512
00:08:20.263  
00:08:20.263  Run status group 0 (all jobs):
00:08:20.263     READ: bw=200MiB/s (210MB/s), 200MiB/s-200MiB/s (210MB/s-210MB/s), io=2072MiB (2172MB), run=10353-10353msec
00:08:20.263    WRITE: bw=407MiB/s (427MB/s), 407MiB/s-407MiB/s (427MB/s-427MB/s), io=2072MiB (2172MB), run=5085-5085msec
00:08:20.263  
00:08:20.263  Disk stats (read/write):
00:08:20.263    nvme0n1: ios=80/0, merge=0/0, ticks=12/0, in_queue=12, util=24.06%
00:08:20.263  <VM-0-4-5> 
00:08:20.263  nvme-host: (groupid=0, jobs=1): err= 0: pid=954: Wed Nov 20 10:05:11 2024
00:08:20.263    read: IOPS=1002, BW=195MiB/s (205MB/s)(2072MiB/10614msec)
00:08:20.263      slat (usec): min=26, max=31123, avg=12479.43, stdev=8579.79
00:08:20.263      clat (usec): min=578, max=62132, avg=26030.59, stdev=14747.98
00:08:20.263       lat (usec): min=1789, max=62726, avg=38510.02, stdev=14135.53
00:08:20.263      clat percentiles (usec):
00:08:20.263       |  1.00th=[  603],  5.00th=[ 7308], 10.00th=[11994], 20.00th=[13304],
00:08:20.263       | 30.00th=[14484], 40.00th=[15664], 50.00th=[24511], 60.00th=[29754],
00:08:20.263       | 70.00th=[34341], 80.00th=[41681], 90.00th=[47449], 95.00th=[51643],
00:08:20.263       | 99.00th=[59507], 99.50th=[62129], 99.90th=[62129], 99.95th=[62129],
00:08:20.263       | 99.99th=[62129]
00:08:20.263    write: IOPS=2046, BW=398MiB/s (418MB/s)(2072MiB/5199msec); 0 zone resets
00:08:20.263      slat (usec): min=272, max=81536, avg=27815.51, stdev=17300.42
00:08:20.263      clat (msec): min=3, max=191, avg=65.70, stdev=48.06
00:08:20.263       lat (msec): min=4, max=206, avg=93.52, stdev=53.50
00:08:20.263      clat percentiles (msec):
00:08:20.263       |  1.00th=[    6],  5.00th=[    8], 10.00th=[   11], 20.00th=[   16],
00:08:20.263       | 30.00th=[   22], 40.00th=[   41], 50.00th=[   61], 60.00th=[   68],
00:08:20.263       | 70.00th=[   97], 80.00th=[  117], 90.00th=[  130], 95.00th=[  150],
00:08:20.263       | 99.00th=[  171], 99.50th=[  178], 99.90th=[  188], 99.95th=[  188],
00:08:20.263       | 99.99th=[  192]
00:08:20.263     bw (  KiB/s): min=157144, max=314288, per=47.68%, avg=194559.24, stdev=68583.26, samples=21
00:08:20.263     iops        : min=  788, max= 1576, avg=975.62, stdev=343.91, samples=21
00:08:20.263    lat (usec)   : 750=1.18%
00:08:20.263    lat (msec)   : 2=0.64%, 4=0.46%, 10=6.42%, 20=28.61%, 50=31.03%
00:08:20.263    lat (msec)   : 100=17.73%, 250=13.93%
00:08:20.263    cpu          : usr=80.02%, sys=1.76%, ctx=709, majf=0, minf=17
00:08:20.263    IO depths    : 1=0.0%, 2=0.6%, 4=1.2%, 8=1.8%, 16=3.6%, 32=7.8%, >=64=84.8%
00:08:20.263       submit    : 0=0.0%, 4=1.8%, 8=1.8%, 16=3.2%, 32=6.4%, 64=11.8%, >=64=75.0%
00:08:20.263       complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:08:20.263       issued rwts: total=10638,10638,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:20.263       latency   : target=0, window=0, percentile=100.00%, depth=512
00:08:20.263  
00:08:20.263  Run status group 0 (all jobs):
00:08:20.263     READ: bw=195MiB/s (205MB/s), 195MiB/s-195MiB/s (205MB/s-205MB/s), io=2072MiB (2172MB), run=10614-10614msec
00:08:20.263    WRITE: bw=398MiB/s (418MB/s), 398MiB/s-398MiB/s (418MB/s-418MB/s), io=2072MiB (2172MB), run=5199-5199msec
00:08:20.263  
00:08:20.263  Disk stats (read/write):
00:08:20.263    nvme0n1: ios=80/0, merge=0/0, ticks=5/0, in_queue=5, util=25.17%
00:08:20.263  <VM-2-8-9> 
00:08:20.263  nvme-host: (groupid=0, jobs=1): err= 0: pid=943: Wed Nov 20 10:05:14 2024
00:08:20.263    read: IOPS=937, BW=157MiB/s (165MB/s)(2048MiB/13027msec)
00:08:20.263      slat (usec): min=56, max=165758, avg=32423.53, stdev=36357.32
00:08:20.263      clat (msec): min=15, max=458, avg=217.39, stdev=88.50
00:08:20.263       lat (msec): min=23, max=512, avg=249.81, stdev=100.52
00:08:20.263      clat percentiles (msec):
00:08:20.263       |  1.00th=[   47],  5.00th=[   72], 10.00th=[   91], 20.00th=[  136],
00:08:20.263       | 30.00th=[  171], 40.00th=[  194], 50.00th=[  215], 60.00th=[  247],
00:08:20.263       | 70.00th=[  275], 80.00th=[  296], 90.00th=[  334], 95.00th=[  363],
00:08:20.263       | 99.00th=[  422], 99.50th=[  435], 99.90th=[  451], 99.95th=[  456],
00:08:20.263       | 99.99th=[  460]
00:08:20.263    write: IOPS=1021, BW=171MiB/s (180MB/s)(2048MiB/11953msec); 0 zone resets
00:08:20.263      slat (usec): min=281, max=94931, avg=28116.04, stdev=17761.40
00:08:20.263      clat (msec): min=8, max=347, avg=144.76, stdev=66.29
00:08:20.263       lat (msec): min=9, max=390, avg=172.88, stdev=70.84
00:08:20.263      clat percentiles (msec):
00:08:20.263       |  1.00th=[   14],  5.00th=[   46], 10.00th=[   68], 20.00th=[   93],
00:08:20.263       | 30.00th=[  104], 40.00th=[  120], 50.00th=[  136], 60.00th=[  148],
00:08:20.263       | 70.00th=[  176], 80.00th=[  199], 90.00th=[  236], 95.00th=[  266],
00:08:20.263       | 99.00th=[  317], 99.50th=[  317], 99.90th=[  330], 99.95th=[  338],
00:08:20.263       | 99.99th=[  347]
00:08:20.263     bw (  KiB/s): min= 1048, max=471000, per=100.00%, avg=209715.20, stdev=127716.60, samples=20
00:08:20.263     iops        : min=    6, max= 2048, avg=1220.80, stdev=710.12, samples=20
00:08:20.263    lat (msec)   : 10=0.36%, 20=0.69%, 50=2.90%, 100=15.11%, 250=57.58%
00:08:20.263    lat (msec)   : 500=23.36%
00:08:20.263    cpu          : usr=66.15%, sys=1.73%, ctx=2772, majf=0, minf=34
00:08:20.263    IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.5%, >=64=99.1%
00:08:20.263       submit    : 0=0.0%, 4=0.0%, 8=1.2%, 16=0.0%, 32=0.0%, 64=19.2%, >=64=79.6%
00:08:20.263       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:20.263       issued rwts: total=12208,12208,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:20.263       latency   : target=0, window=0, percentile=100.00%, depth=512
00:08:20.263  
00:08:20.263  Run status group 0 (all jobs):
00:08:20.263     READ: bw=157MiB/s (165MB/s), 157MiB/s-157MiB/s (165MB/s-165MB/s), io=2048MiB (2147MB), run=13027-13027msec
00:08:20.263    WRITE: bw=171MiB/s (180MB/s), 171MiB/s-171MiB/s (180MB/s-180MB/s), io=2048MiB (2147MB), run=11953-11953msec
00:08:20.263  
00:08:20.263  Disk stats (read/write):
00:08:20.263    nvme0n1: ios=5/0, merge=0/0, ticks=0/0, in_queue=0, util=23.74%
00:08:20.263  All clients: (groupid=0, jobs=3): err= 0: pid=0: Wed Nov 20 10:05:14 2024
00:08:20.263    read: IOPS=2570, BW=475Mi (498M)(6191MiB/13027msec)
00:08:20.263      slat (usec): min=24, max=165758, avg=19354.00, stdev=24988.65
00:08:20.263      clat (usec): min=394, max=458275, avg=94991.37, stdev=107615.11
00:08:20.263       lat (usec): min=1789, max=512569, avg=114345.36, stdev=119709.97
00:08:20.263    write: IOPS=2801, BW=518Mi (543M)(6191MiB/11953msec); 0 zone resets
00:08:20.263      slat (usec): min=262, max=94931, avg=27619.43, stdev=17278.99
00:08:20.263      clat (usec): min=383, max=347819, avg=93926.77, stdev=67232.07
00:08:20.263       lat (msec): min=4, max=390, avg=121.55, stdev=71.55
00:08:20.263     bw (  KiB/s): min=315336, max=1099576, per=60.59%, avg=606286.82, stdev=91408.31, samples=62
00:08:20.263     iops        : min= 1582, max= 5200, avg=3209.37, stdev=489.28, samples=62
00:08:20.263    lat (usec)   : 500=0.20%, 750=0.73%, 1000=0.50%
00:08:20.263    lat (msec)   : 2=0.30%, 4=0.51%, 10=3.44%, 20=19.56%, 50=20.24%
00:08:20.263    lat (msec)   : 100=16.36%, 250=29.65%, 500=8.52%
00:08:20.263    cpu          : usr=75.37%, sys=1.78%, ctx=4166, majf=0, minf=68
00:08:20.263    IO depths    : 1=0.0%, 2=0.4%, 4=0.8%, 8=1.1%, 16=2.3%, 32=5.2%, >=64=90.0%
00:08:20.263       submit    : 0=0.0%, 4=1.2%, 8=1.6%, 16=2.1%, 32=4.1%, 64=14.4%, >=64=76.6%
00:08:20.263       complete  : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5%
00:08:20.263       issued rwts: total=33484,33484,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:20.263   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@75 -- # timing_exit run_vm_cmd
00:08:20.263   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:20.263   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:08:20.263   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@77 -- # vm_shutdown_all
00:08:20.263   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:08:20.263   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:08:20.263    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@489 -- # vm_list_all
00:08:20.263    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@466 -- # vms=()
00:08:20.263    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@466 -- # local vms
00:08:20.263    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:08:20.263    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@468 -- # (( 3 > 0 ))
00:08:20.263    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/0 /root/vhost_test/vms/1 /root/vhost_test/vms/2
00:08:20.263   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:08:20.263   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@492 -- # vm_shutdown 0
00:08:20.263   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@417 -- # vm_num_is_valid 0
00:08:20.263   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:20.263   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:20.263   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/0
00:08:20.263   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/0 ]]
00:08:20.263   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@424 -- # vm_is_running 0
00:08:20.263   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:08:20.263   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:20.263   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:20.264   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:08:20.264   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:08:20.264   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:08:20.264    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/0/qemu.pid
00:08:20.264   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=1754539
00:08:20.264   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 1754539
00:08:20.264   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:08:20.264   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/0'
00:08:20.264   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/0'
00:08:20.264   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:20.264   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:20.264   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:20.264   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:20.264   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:20.264   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/0'
00:08:20.264  INFO: Shutting down virtual machine /root/vhost_test/vms/0
00:08:20.264   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@432 -- # set +e
00:08:20.264   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@433 -- # vm_exec 0 'nohup sh -c '\''shutdown -h -P now'\'''
00:08:20.264   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:08:20.264   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:20.264   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:20.264   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:08:20.264   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:20.264    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:08:20.264    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:08:20.264    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:20.264    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:20.264    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:08:20.264    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:08:20.264   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:08:20.264  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:08:20.522  Connection to 127.0.0.1 closed by remote host.
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@433 -- # true
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@434 -- # notice 'VM0 is shutting down - wait a while to complete'
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'VM0 is shutting down - wait a while to complete'
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: VM0 is shutting down - wait a while to complete'
00:08:20.522  INFO: VM0 is shutting down - wait a while to complete
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@435 -- # set -e
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@492 -- # vm_shutdown 1
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@424 -- # vm_is_running 1
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:08:20.522    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=1754706
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 1754706
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:20.522   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:08:20.522  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:08:20.523   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@432 -- # set +e
00:08:20.523   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:08:20.523   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:20.523   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:20.523   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:20.523   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:08:20.523   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:20.523    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:20.523    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:20.523    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:20.523    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:20.523    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:20.523    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:20.523   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:08:20.523  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:20.782   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:08:20.782   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:08:20.782   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:20.782   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:20.782   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:20.782   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:20.782   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:20.782   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:08:20.782  INFO: VM1 is shutting down - wait a while to complete
00:08:20.782   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@435 -- # set -e
00:08:20.782   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:08:20.782   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@492 -- # vm_shutdown 2
00:08:20.782   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@417 -- # vm_num_is_valid 2
00:08:20.782   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:20.782   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:20.782   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/2
00:08:20.782   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/2 ]]
00:08:20.782   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@424 -- # vm_is_running 2
00:08:20.782   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 2
00:08:20.782   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:20.782   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:20.782   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/2
00:08:20.782   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:08:20.782   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:08:20.783    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/2/qemu.pid
00:08:20.783   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=1754977
00:08:20.783   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 1754977
00:08:20.783   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:08:20.783   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/2'
00:08:20.783   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/2'
00:08:20.783   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:20.783   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:20.783   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:20.783   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:20.783   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:20.783   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/2'
00:08:20.783  INFO: Shutting down virtual machine /root/vhost_test/vms/2
00:08:20.783   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@432 -- # set +e
00:08:20.783   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@433 -- # vm_exec 2 'nohup sh -c '\''shutdown -h -P now'\'''
00:08:20.783   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:08:20.783   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:20.783   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:20.783   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:08:20.783   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:20.783    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:08:20.783    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:08:20.783    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:20.783    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:20.783    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:08:20.783    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:08:20.783   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:08:20.783  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@434 -- # notice 'VM2 is shutting down - wait a while to complete'
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'VM2 is shutting down - wait a while to complete'
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: VM2 is shutting down - wait a while to complete'
00:08:21.043  INFO: VM2 is shutting down - wait a while to complete
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@435 -- # set -e
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:08:21.043  INFO: Waiting for VMs to shutdown...
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 3 > 0 ))
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 0
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:08:21.043    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/0/qemu.pid
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=1754539
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 1754539
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 1
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:08:21.043    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=1754706
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 1754706
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 2
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 2
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/2
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:08:21.043    10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/2/qemu.pid
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=1754977
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 1754977
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:08:21.043   10:05:15 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@500 -- # sleep 1
00:08:21.609  [2024-11-20 10:05:16.450572] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: disabling controller
00:08:21.868  [2024-11-20 10:05:16.822632] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:08:21.868  [2024-11-20 10:05:16.856149] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: disabling controller
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 3 > 0 ))
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 0
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 1
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 2
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 2
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/2
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:08:22.127   10:05:16 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:08:22.127   10:05:17 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@500 -- # sleep 1
00:08:23.060   10:05:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:08:23.060   10:05:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:08:23.060   10:05:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:08:23.060   10:05:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:08:23.060   10:05:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:23.060   10:05:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:23.060   10:05:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:23.060   10:05:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:23.060   10:05:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:23.060   10:05:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:08:23.060  INFO: All VMs successfully shut down
00:08:23.060   10:05:18 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@505 -- # return 0
00:08:23.060   10:05:18 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@79 -- # timing_enter clean_vfio_user
00:08:23.060   10:05:18 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:23.060   10:05:18 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:08:23.060    10:05:18 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@81 -- # seq 0 2
00:08:23.060   10:05:18 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@81 -- # for i in $(seq 0 $vm_no)
00:08:23.060   10:05:18 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@82 -- # vm_muser_dir=/root/vhost_test/vms/0/muser
00:08:23.060   10:05:18 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@83 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_listener nqn.2019-07.io.spdk:cnode0 -t vfiouser -a /root/vhost_test/vms/0/muser/domain/muser0/0 -s 0
00:08:23.317   10:05:18 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@84 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_delete_subsystem nqn.2019-07.io.spdk:cnode0
00:08:23.573   10:05:18 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@85 -- # (( i == vm_no ))
00:08:23.573   10:05:18 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@88 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_delete Malloc0
00:08:23.831   10:05:18 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@81 -- # for i in $(seq 0 $vm_no)
00:08:23.831   10:05:18 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@82 -- # vm_muser_dir=/root/vhost_test/vms/1/muser
00:08:23.831   10:05:18 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@83 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_listener nqn.2019-07.io.spdk:cnode1 -t vfiouser -a /root/vhost_test/vms/1/muser/domain/muser1/1 -s 0
00:08:24.090   10:05:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@84 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_delete_subsystem nqn.2019-07.io.spdk:cnode1
00:08:24.658   10:05:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@85 -- # (( i == vm_no ))
00:08:24.658   10:05:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@88 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_delete Malloc1
00:08:25.227   10:05:20 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@81 -- # for i in $(seq 0 $vm_no)
00:08:25.227   10:05:20 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@82 -- # vm_muser_dir=/root/vhost_test/vms/2/muser
00:08:25.227   10:05:20 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@83 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_listener nqn.2019-07.io.spdk:cnode2 -t vfiouser -a /root/vhost_test/vms/2/muser/domain/muser2/2 -s 0
00:08:25.487   10:05:20 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@84 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_delete_subsystem nqn.2019-07.io.spdk:cnode2
00:08:25.745   10:05:20 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@85 -- # (( i == vm_no ))
00:08:25.745   10:05:20 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@86 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_detach_controller Nvme0
00:08:27.120   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@92 -- # vhost_kill 0
00:08:27.120   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@202 -- # local rc=0
00:08:27.120   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@203 -- # local vhost_name=0
00:08:27.120   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:08:27.120   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@210 -- # local vhost_dir
00:08:27.120    10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@211 -- # get_vhost_dir 0
00:08:27.120    10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:08:27.120    10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:08:27.120    10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:08:27.120   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:08:27.120   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:08:27.120   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:08:27.120   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:08:27.120   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:27.120   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:08:27.378   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@220 -- # local vhost_pid
00:08:27.378    10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:08:27.378   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@221 -- # vhost_pid=1753458
00:08:27.378   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@222 -- # notice 'killing vhost (PID 1753458) app'
00:08:27.378   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 1753458) app'
00:08:27.378   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:27.378   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:27.378   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:27.378   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:27.378   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:27.378   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 1753458) app'
00:08:27.378  INFO: killing vhost (PID 1753458) app
00:08:27.378   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@224 -- # kill -INT 1753458
00:08:27.378   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:08:27.378   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:08:27.378   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:27.378   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:27.378   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:27.379   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:27.379   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:27.379   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:08:27.379  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:08:27.379   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@226 -- # (( i = 0 ))
00:08:27.379   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:08:27.379   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@227 -- # kill -0 1753458
00:08:27.379   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@228 -- # echo .
00:08:27.379  .
00:08:27.379   10:05:22 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@229 -- # sleep 1
00:08:28.317   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@226 -- # (( i++ ))
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@227 -- # kill -0 1753458
00:08:28.318  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (1753458) - No such process
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@231 -- # break
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@234 -- # kill -0 1753458
00:08:28.318  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (1753458) - No such process
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@239 -- # kill -0 1753458
00:08:28.318  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (1753458) - No such process
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@245 -- # is_pid_child 1753458
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1668 -- # local pid=1753458 _pid
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1670 -- # read -r _pid
00:08:28.318    10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1667 -- # jobs -pr
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1671 -- # (( pid == _pid ))
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1670 -- # read -r _pid
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1674 -- # return 1
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@261 -- # return 0
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@93 -- # timing_exit clean_vfio_user
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@94 -- # vhosttestfini
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@1 -- # clean_vfio_user
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@6 -- # vm_kill_all
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@476 -- # local vm
00:08:28.318    10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@477 -- # vm_list_all
00:08:28.318    10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@466 -- # vms=()
00:08:28.318    10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@466 -- # local vms
00:08:28.318    10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:08:28.318    10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@468 -- # (( 3 > 0 ))
00:08:28.318    10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/0 /root/vhost_test/vms/1 /root/vhost_test/vms/2
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@478 -- # vm_kill 0
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@442 -- # vm_num_is_valid 0
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/0
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@446 -- # return 0
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@478 -- # vm_kill 1
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@442 -- # vm_num_is_valid 1
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/1
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@446 -- # return 0
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@478 -- # vm_kill 2
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@442 -- # vm_num_is_valid 2
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/2
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@446 -- # return 0
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@7 -- # vhost_kill 0
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@202 -- # local rc=0
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@203 -- # local vhost_name=0
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@210 -- # local vhost_dir
00:08:28.318    10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@211 -- # get_vhost_dir 0
00:08:28.318    10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:08:28.318    10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:08:28.318    10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@215 -- # warning 'no vhost pid file found'
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@90 -- # message WARN 'no vhost pid file found'
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=WARN
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'WARN: no vhost pid file found'
00:08:28.318  WARN: no vhost pid file found
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@216 -- # return 0
00:08:28.318  
00:08:28.318  real	1m4.081s
00:08:28.318  user	4m14.081s
00:08:28.318  sys	0m3.308s
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:08:28.318  ************************************
00:08:28.318  END TEST vfio_user_nvme_fio
00:08:28.318  ************************************
00:08:28.318   10:05:23 vfio_user_qemu -- vfio_user/vfio_user.sh@16 -- # run_test vfio_user_nvme_restart_vm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_restart_vm.sh
00:08:28.318   10:05:23 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:28.318   10:05:23 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:28.318   10:05:23 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:08:28.318  ************************************
00:08:28.318  START TEST vfio_user_nvme_restart_vm
00:08:28.318  ************************************
00:08:28.318   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_restart_vm.sh
00:08:28.577  * Looking for test storage...
00:08:28.577  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:08:28.577    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:28.577     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1693 -- # lcov --version
00:08:28.577     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:28.577    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@336 -- # IFS=.-:
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@336 -- # read -ra ver1
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@337 -- # IFS=.-:
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@337 -- # read -ra ver2
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@338 -- # local 'op=<'
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@340 -- # ver1_l=2
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@341 -- # ver2_l=1
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@344 -- # case "$op" in
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@345 -- # : 1
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:28.578     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@365 -- # decimal 1
00:08:28.578     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@353 -- # local d=1
00:08:28.578     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:28.578     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@355 -- # echo 1
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@365 -- # ver1[v]=1
00:08:28.578     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@366 -- # decimal 2
00:08:28.578     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@353 -- # local d=2
00:08:28.578     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:28.578     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@355 -- # echo 2
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@366 -- # ver2[v]=2
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@368 -- # return 0
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:28.578  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:28.578  		--rc genhtml_branch_coverage=1
00:08:28.578  		--rc genhtml_function_coverage=1
00:08:28.578  		--rc genhtml_legend=1
00:08:28.578  		--rc geninfo_all_blocks=1
00:08:28.578  		--rc geninfo_unexecuted_blocks=1
00:08:28.578  		
00:08:28.578  		'
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:28.578  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:28.578  		--rc genhtml_branch_coverage=1
00:08:28.578  		--rc genhtml_function_coverage=1
00:08:28.578  		--rc genhtml_legend=1
00:08:28.578  		--rc geninfo_all_blocks=1
00:08:28.578  		--rc geninfo_unexecuted_blocks=1
00:08:28.578  		
00:08:28.578  		'
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:28.578  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:28.578  		--rc genhtml_branch_coverage=1
00:08:28.578  		--rc genhtml_function_coverage=1
00:08:28.578  		--rc genhtml_legend=1
00:08:28.578  		--rc geninfo_all_blocks=1
00:08:28.578  		--rc geninfo_unexecuted_blocks=1
00:08:28.578  		
00:08:28.578  		'
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:28.578  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:28.578  		--rc genhtml_branch_coverage=1
00:08:28.578  		--rc genhtml_function_coverage=1
00:08:28.578  		--rc genhtml_legend=1
00:08:28.578  		--rc geninfo_all_blocks=1
00:08:28.578  		--rc geninfo_unexecuted_blocks=1
00:08:28.578  		
00:08:28.578  		'
00:08:28.578   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@6 -- # : 128
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@7 -- # : 512
00:08:28.578    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:08:28.578     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@6 -- # : false
00:08:28.578     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@7 -- # : /root/vhost_test
00:08:28.578     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:08:28.578     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@9 -- # : qemu-img
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:08:28.578     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:08:28.578     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:08:28.578     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:08:28.578     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:08:28.578     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:08:28.578     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:08:28.578       10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_restart_vm.sh
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:08:28.578     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:08:28.578     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:08:28.578     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:08:28.578     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:08:28.578     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:08:28.578     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@2 -- # vhost_0_main_core=0
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:08:28.578     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:08:28.578      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:08:28.579      10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:08:28.579       10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:08:28.579        10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@244 -- # check_cgroup
00:08:28.579        10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:08:28.579        10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:08:28.579        10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@10 -- # echo 2
00:08:28.579       10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@244 -- # cgroup_version=2
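The `check_cgroup` trace above reduces to one probe: a cgroup-v2 unified hierarchy exposes `cgroup.controllers` at its root, a v1 hierarchy does not. A minimal sketch of the same detection, parameterized on the mount point so it can be exercised against any directory (`check_cgroup_at` is a made-up helper name; the real `check_cgroup` hardcodes `/sys/fs/cgroup` and additionally verifies that `cpuset` appears in the controller list):

```shell
#!/bin/sh
# Sketch of the cgroup-version detection the scheduler helpers perform.
# check_cgroup_at is hypothetical; the traced check_cgroup uses /sys/fs/cgroup.
check_cgroup_at() {
    root=$1
    if [ -e "$root/cgroup.controllers" ]; then
        # cgroup v2: the unified hierarchy publishes cgroup.controllers at its root
        echo 2
    else
        # cgroup v1: no such file at the hierarchy root
        echo 1
    fi
}

check_cgroup_at /sys/fs/cgroup
```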
00:08:28.579    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:08:28.579    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:08:28.579    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/common.sh
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config
00:08:28.579    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]'
00:08:28.579    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0
00:08:28.579    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5
00:08:28.579    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:08:28.579    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7
00:08:28.579    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:08:28.579    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9
00:08:28.579    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@13 -- # bdfs=($(get_nvme_bdfs))
00:08:28.579    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@13 -- # get_nvme_bdfs
00:08:28.579    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1498 -- # bdfs=()
00:08:28.579    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1498 -- # local bdfs
00:08:28.579    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:08:28.579     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:08:28.579     10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:08:28.579    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:08:28.579    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:85:00.0
00:08:28.579    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@14 -- # get_vhost_dir 0
00:08:28.579    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:08:28.579    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:08:28.579    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@14 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@16 -- # trap clean_vfio_user EXIT
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@18 -- # vhosttestinit
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@20 -- # vfio_user_run 0
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@11 -- # local vhost_name=0
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@12 -- # local vfio_user_dir nvmf_pid_file rpc_py
00:08:28.579    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@14 -- # get_vhost_dir 0
00:08:28.579    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:08:28.579    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:08:28.579    10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@14 -- # vfio_user_dir=/root/vhost_test/vhost/0
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@15 -- # nvmf_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@16 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@18 -- # mkdir -p /root/vhost_test/vhost/0
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@20 -- # timing_enter vfio_user_start
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@22 -- # nvmfpid=1761645
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@23 -- # echo 1761645
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/nvmf_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@25 -- # echo 'Process pid: 1761645'
00:08:28.579  Process pid: 1761645
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@26 -- # echo 'waiting for app to run...'
00:08:28.579  waiting for app to run...
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@27 -- # waitforlisten 1761645 /root/vhost_test/vhost/0/rpc.sock
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@835 -- # '[' -z 1761645 ']'
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:08:28.579  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:28.579   10:05:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:08:28.840  [2024-11-20 10:05:23.739927] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:08:28.840  [2024-11-20 10:05:23.740057] [ DPDK EAL parameters: nvmf --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1761645 ]
00:08:28.840  EAL: No free 2048 kB hugepages reported on node 1
00:08:29.100  [2024-11-20 10:05:24.006863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:29.100  [2024-11-20 10:05:24.111075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:29.100  [2024-11-20 10:05:24.111146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:29.100  [2024-11-20 10:05:24.111231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:29.100  [2024-11-20 10:05:24.111261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:08:29.668   10:05:24 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:29.668   10:05:24 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@868 -- # return 0
00:08:29.668   10:05:24 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@29 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_transport -t VFIOUSER
00:08:29.925   10:05:24 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@30 -- # timing_exit vfio_user_start
00:08:29.925   10:05:24 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:29.925   10:05:24 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:08:29.925   10:05:24 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@22 -- # vm_muser_dir=/root/vhost_test/vms/1/muser
00:08:29.925   10:05:24 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@23 -- # rm -rf /root/vhost_test/vms/1/muser
00:08:29.925   10:05:24 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@24 -- # mkdir -p /root/vhost_test/vms/1/muser/domain/muser1/1
00:08:29.925   10:05:24 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@26 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:85:00.0
00:08:33.216  Nvme0n1
00:08:33.216   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -s SPDK001 -a
00:08:33.474   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@28 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Nvme0n1
00:08:33.732   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@29 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /root/vhost_test/vms/1/muser/domain/muser1/1 -s 0
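Condensed from the trace, the whole vfio-user target bring-up is a short RPC sequence against `nvmf_tgt`'s private socket. A sketch under the same paths and PCI address this run used (not runnable outside a built SPDK tree with that NVMe device present):

```shell
# Sketch of vfio_user_run + the subsystem setup, condensed from the trace.
# Paths, the NQN, and 0000:85:00.0 are the values this particular run used.
SPDK=/var/jenkins/workspace/vfio-user-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock"

# 1. Start the NVMe-oF target on its own RPC socket (core mask 0xf, 512 MB).
"$SPDK/build/bin/nvmf_tgt" -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512 &

# 2. Create the VFIOUSER transport.
$RPC nvmf_create_transport -t VFIOUSER

# 3. Attach the physical NVMe device as bdev Nvme0 (namespace Nvme0n1).
$RPC bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:85:00.0

# 4. Create a subsystem, add the namespace, and listen on the vfio-user
#    socket directory the VM will attach to.
$RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -s SPDK001 -a
$RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Nvme0n1
$RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
    -a /root/vhost_test/vms/1/muser/domain/muser1/1 -s 0
```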
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@31 -- # vm_setup --disk-type=vfio_user --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:08:33.992  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:08:33.992  INFO: Creating new VM in /root/vhost_test/vms/1
00:08:33.992  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:08:33.992  INFO: TASK MASK: 6-7
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:08:33.992  INFO: NUMA NODE: 0
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@758 -- # notice 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:08:33.992  INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@759 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl")
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@760 -- # [[ 1 == '' ]]
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:08:33.992  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:08:33.992   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@787 -- # cat
00:08:33.992    10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/1/muser/domain/muser1/1/cntrl
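The generated `run.sh` is the single QEMU invocation printed above. An annotated reconstruction of the same flags, highlighting the two pieces that make vfio-user work (comments are editorial; this is not runnable outside this test bed):

```shell
# Annotated reconstruction of /root/vhost_test/vms/1/run.sh (same flags as
# the printf in the trace; paths and ports are specific to this run).
qemu_args=(
    -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize
    # Guest RAM is file-backed on hugepages and shared, so the vfio-user
    # server process (nvmf_tgt) can mmap it for DMA into the guest:
    -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind
    -numa node,memdev=mem
    -snapshot    # '--os-mode snapshot': guest writes to the image are discarded
    -monitor telnet:127.0.0.1:10102,server,nowait
    -pidfile /root/vhost_test/vms/1/qemu.pid
    -serial file:/root/vhost_test/vms/1/serial.log
    -D /root/vhost_test/vms/1/qemu.log
    -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios
    -device isa-debugcon,iobase=0x402,chardev=seabios
    -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765   # ssh / fio forwards
    -net nic
    -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk
    -device ide-hd,drive=os_disk,bootindex=0
    # The NVMe controller itself: a vfio-user PCI device whose backend is
    # the UNIX socket the VFIOUSER listener was created on:
    -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/1/muser/domain/muser1/1/cntrl
)
taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 "${qemu_args[@]}"
```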
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@835 -- # echo 101
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@32 -- # vm_run 1
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@856 -- # false
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@859 -- # shift 0
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@373 -- # return 1
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:08:33.993  INFO: running /root/vhost_test/vms/1/run.sh
00:08:33.993   10:05:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:08:33.993  Running VM in /root/vhost_test/vms/1
00:08:34.564  Waiting for QEMU pid file
00:08:34.564  [2024-11-20 10:05:29.656413] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:08:35.502  === qemu.log ===
00:08:35.502  === qemu.log ===
00:08:35.502   10:05:30 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@33 -- # vm_wait_for_boot 60 1
00:08:35.502   10:05:30 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:08:35.502   10:05:30 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:08:35.502   10:05:30 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@281 -- # return 0
00:08:35.503   10:05:30 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:08:35.503   10:05:30 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:08:35.503  INFO: Waiting for VMs to boot
00:08:35.503  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:08:47.718  [2024-11-20 10:05:42.115789] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:08:47.718  [2024-11-20 10:05:42.124809] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:08:47.718  [2024-11-20 10:05:42.128833] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:08:57.687  
00:08:57.687  INFO: VM1 ready
00:08:57.687  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:57.687  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:57.687  INFO: all VMs ready
00:08:57.687   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@973 -- # return 0
00:08:57.687   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@35 -- # vm_exec 1 lsblk
00:08:57.687   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:57.687   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:57.687   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:57.687   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:08:57.687   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@339 -- # shift
00:08:57.687    10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:57.687    10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:57.687    10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:57.687    10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:57.687    10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:57.687    10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:57.687   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 lsblk
00:08:57.687  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:57.687  NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
00:08:57.687  sda       8:0    0     5G  0 disk 
00:08:57.687  ├─sda1    8:1    0     1M  0 part 
00:08:57.687  ├─sda2    8:2    0  1000M  0 part /boot
00:08:57.687  ├─sda3    8:3    0   100M  0 part /boot/efi
00:08:57.687  ├─sda4    8:4    0     4M  0 part 
00:08:57.687  └─sda5    8:5    0   3.9G  0 part /home
00:08:57.687                                    /
00:08:57.687  zram0   252:0    0   946M  0 disk [SWAP]
00:08:57.687  nvme0n1 259:1    0 931.5G  0 disk 
00:08:57.687   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@37 -- # vm_shutdown_all
00:08:57.687   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:08:57.687   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:08:57.687    10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:08:57.687    10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # vms=()
00:08:57.687    10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # local vms
00:08:57.687    10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:08:57.687    10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:08:57.687    10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:08:57.687   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:08:57.687   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:08:57.687   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:08:57.687   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:57.687   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:57.687   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:08:57.687   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:08:57.687   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:08:57.687   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:08:57.687   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:57.687   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:57.687   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:08:57.687   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:08:57.687   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:08:57.687    10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=1762309
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 1762309
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:08:57.945  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@432 -- # set +e
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@339 -- # shift
00:08:57.945    10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:57.945    10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:57.945    10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:57.945    10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:57.945    10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:57.945    10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:08:57.945  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:08:57.945  INFO: VM1 is shutting down - wait a while to complete
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@435 -- # set -e
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:08:57.945  INFO: Waiting for VMs to shutdown...
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:08:57.945    10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=1762309
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 1762309
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:08:57.945   10:05:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:08:58.881   10:05:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:08:58.881   10:05:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:08:58.881   10:05:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:08:58.881   10:05:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:08:58.881   10:05:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:58.881   10:05:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:58.881   10:05:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:08:58.881   10:05:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:08:58.881   10:05:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:08:58.881    10:05:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:08:59.139   10:05:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=1762309
00:08:59.139   10:05:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 1762309
00:08:59.139   10:05:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:08:59.139   10:05:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:08:59.139  [2024-11-20 10:05:54.039134] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:09:00.072   10:05:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:09:00.072   10:05:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:09:00.072   10:05:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:09:00.072   10:05:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:00.072   10:05:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:00.072   10:05:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:00.072   10:05:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:00.072   10:05:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:00.072   10:05:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@373 -- # return 1
00:09:00.072   10:05:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:09:00.072   10:05:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:09:01.005   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:09:01.005   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:09:01.005   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:09:01.005   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:09:01.005   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:01.005   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:01.005   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:01.005   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:01.005   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:01.005   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:09:01.005  INFO: All VMs successfully shut down
00:09:01.005   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@505 -- # return 0
00:09:01.005   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@40 -- # vm_setup --disk-type=vfio_user --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:09:01.005   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:09:01.005   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:01.006  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:09:01.006  INFO: Creating new VM in /root/vhost_test/vms/1
00:09:01.006  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:09:01.006  INFO: TASK MASK: 6-7
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:09:01.006  INFO: NUMA NODE: 0
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@758 -- # notice 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:09:01.006  INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@759 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl")
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@760 -- # [[ 1 == '' ]]
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:09:01.006  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@787 -- # cat
00:09:01.006    10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/1/muser/domain/muser1/1/cntrl
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@835 -- # echo 101
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@41 -- # vm_run 1
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@856 -- # false
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@859 -- # shift 0
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@373 -- # return 1
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:09:01.006  INFO: running /root/vhost_test/vms/1/run.sh
00:09:01.006   10:05:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:09:01.006  Running VM in /root/vhost_test/vms/1
00:09:01.572  Waiting for QEMU pid file
00:09:01.830  [2024-11-20 10:05:56.746210] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:09:02.763  === qemu.log ===
00:09:02.763  === qemu.log ===
00:09:02.763   10:05:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@42 -- # vm_wait_for_boot 60 1
00:09:02.763   10:05:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:09:02.763   10:05:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:09:02.763   10:05:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@281 -- # return 0
00:09:02.763   10:05:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:09:02.763   10:05:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:02.763  INFO: Waiting for VMs to boot
00:09:02.763  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:09:15.025  [2024-11-20 10:06:09.376396] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:09:15.025  [2024-11-20 10:06:09.385432] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:09:15.025  [2024-11-20 10:06:09.389443] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:09:23.136  
00:09:23.136  INFO: VM1 ready
00:09:23.136  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:23.394  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:24.328  INFO: all VMs ready
00:09:24.328   10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@973 -- # return 0
00:09:24.328   10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@44 -- # vm_exec 1 lsblk
00:09:24.328   10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:09:24.328   10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:24.328   10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:24.328   10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:09:24.328   10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@339 -- # shift
00:09:24.328    10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:09:24.328    10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:09:24.328    10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:24.328    10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:24.328    10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:09:24.328    10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:09:24.328   10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 lsblk
00:09:24.329  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:24.329  NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
00:09:24.329  sda       8:0    0     5G  0 disk 
00:09:24.329  ├─sda1    8:1    0     1M  0 part 
00:09:24.329  ├─sda2    8:2    0  1000M  0 part /boot
00:09:24.329  ├─sda3    8:3    0   100M  0 part /boot/efi
00:09:24.329  ├─sda4    8:4    0     4M  0 part 
00:09:24.329  └─sda5    8:5    0   3.9G  0 part /home
00:09:24.329                                    /
00:09:24.329  zram0   252:0    0   946M  0 disk [SWAP]
00:09:24.329  nvme0n1 259:1    0 931.5G  0 disk 
00:09:24.329   10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@47 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_ns nqn.2019-07.io.spdk:cnode1 1
00:09:24.586   10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_listener nqn.2019-07.io.spdk:cnode1 -t vfiouser -a /root/vhost_test/vms/1/muser/domain/muser1/1 -s 0
00:09:24.844   10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@53 -- # vm_exec 1 'echo 1 > /sys/class/nvme/nvme0/device/remove'
00:09:24.844   10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:09:24.844   10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:24.844   10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:24.844   10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:09:24.844   10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@339 -- # shift
00:09:24.844    10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:09:24.844    10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:09:24.844    10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:24.844    10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:24.844    10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:09:24.844    10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:09:24.844   10:06:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'echo 1 > /sys/class/nvme/nvme0/device/remove'
00:09:25.103  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@55 -- # vm_shutdown_all
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:09:25.103    10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:09:25.103    10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # vms=()
00:09:25.103    10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # local vms
00:09:25.103    10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:09:25.103    10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:09:25.103    10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:09:25.103    10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=1765511
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 1765511
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:09:25.103  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@432 -- # set +e
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@339 -- # shift
00:09:25.103    10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:09:25.103    10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:09:25.103    10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:25.103    10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:25.103    10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:09:25.103    10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:09:25.103   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:09:25.103  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:25.362   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:09:25.362   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:09:25.362   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:25.362   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:25.362   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:25.362   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:25.362   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:25.363   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:09:25.363  INFO: VM1 is shutting down - wait a while to complete
00:09:25.363   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@435 -- # set -e
00:09:25.363   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:09:25.363   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:09:25.363   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:25.363   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:25.363   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:25.363   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:25.363   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:25.363   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:09:25.363  INFO: Waiting for VMs to shutdown...
00:09:25.363   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:09:25.363   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:09:25.363   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:09:25.363   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:25.363   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:25.363   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:25.363   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:25.363   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:25.363   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:09:25.363    10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:09:25.363   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=1765511
00:09:25.363   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 1765511
00:09:25.363   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:09:25.363   10:06:20 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:09:26.298   10:06:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:09:26.298   10:06:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:09:26.298   10:06:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:09:26.298   10:06:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:26.298   10:06:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:26.298   10:06:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:26.298   10:06:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:26.298   10:06:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:26.298   10:06:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:09:26.298    10:06:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:09:26.298   10:06:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=1765511
00:09:26.298   10:06:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 1765511
00:09:26.298   10:06:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:09:26.298   10:06:21 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:09:27.233   10:06:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:09:27.233   10:06:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:09:27.233   10:06:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:09:27.233   10:06:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:27.233   10:06:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:27.233   10:06:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:27.233   10:06:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:27.233   10:06:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:27.233   10:06:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@373 -- # return 1
00:09:27.233   10:06:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:09:27.233   10:06:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:09:28.606   10:06:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:09:28.606   10:06:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:09:28.606   10:06:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:09:28.606   10:06:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:09:28.606   10:06:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:28.606   10:06:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:28.606   10:06:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:28.606   10:06:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:28.606   10:06:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:28.606   10:06:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:09:28.606  INFO: All VMs successfully shut down
00:09:28.606   10:06:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@505 -- # return 0
00:09:28.606   10:06:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@57 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_detach_controller Nvme0
00:09:29.981   10:06:24 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@58 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_delete_subsystem nqn.2019-07.io.spdk:cnode1
00:09:30.239   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@60 -- # vhosttestfini
00:09:30.239   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:09:30.239   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@1 -- # clean_vfio_user
00:09:30.239   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@6 -- # vm_kill_all
00:09:30.239   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@476 -- # local vm
00:09:30.239    10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@477 -- # vm_list_all
00:09:30.239    10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # vms=()
00:09:30.239    10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # local vms
00:09:30.239    10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:09:30.239    10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:09:30.239    10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:09:30.239   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:09:30.239   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@478 -- # vm_kill 1
00:09:30.239   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@442 -- # vm_num_is_valid 1
00:09:30.239   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:30.239   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:30.239   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/1
00:09:30.239   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:30.239   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@446 -- # return 0
00:09:30.239   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:09:30.239   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@7 -- # vhost_kill 0
00:09:30.239   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@202 -- # local rc=0
00:09:30.239   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@203 -- # local vhost_name=0
00:09:30.239   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:09:30.239   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@210 -- # local vhost_dir
00:09:30.239    10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@211 -- # get_vhost_dir 0
00:09:30.239    10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:09:30.239    10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:09:30.239    10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@220 -- # local vhost_pid
00:09:30.240    10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@221 -- # vhost_pid=1761645
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@222 -- # notice 'killing vhost (PID 1761645) app'
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 1761645) app'
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 1761645) app'
00:09:30.240  INFO: killing vhost (PID 1761645) app
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@224 -- # kill -INT 1761645
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:09:30.240  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@226 -- # (( i = 0 ))
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@227 -- # kill -0 1761645
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@228 -- # echo .
00:09:30.240  .
00:09:30.240   10:06:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@229 -- # sleep 1
00:09:31.179   10:06:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@226 -- # (( i++ ))
00:09:31.179   10:06:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:09:31.179   10:06:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@227 -- # kill -0 1761645
00:09:31.179  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (1761645) - No such process
00:09:31.179   10:06:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@231 -- # break
00:09:31.179   10:06:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@234 -- # kill -0 1761645
00:09:31.179  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (1761645) - No such process
00:09:31.179   10:06:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@239 -- # kill -0 1761645
00:09:31.179  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (1761645) - No such process
00:09:31.179   10:06:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@245 -- # is_pid_child 1761645
00:09:31.179   10:06:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1668 -- # local pid=1761645 _pid
00:09:31.179   10:06:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1670 -- # read -r _pid
00:09:31.179    10:06:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1667 -- # jobs -pr
00:09:31.179   10:06:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1671 -- # (( pid == _pid ))
00:09:31.179   10:06:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1670 -- # read -r _pid
00:09:31.179   10:06:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1674 -- # return 1
00:09:31.179   10:06:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:09:31.179   10:06:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:31.179   10:06:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:31.440   10:06:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:09:31.440   10:06:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@261 -- # return 0
00:09:31.440  
00:09:31.440  real	1m2.920s
00:09:31.440  user	4m6.417s
00:09:31.440  sys	0m1.982s
00:09:31.440   10:06:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:31.440   10:06:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:31.440  ************************************
00:09:31.440  END TEST vfio_user_nvme_restart_vm
00:09:31.440  ************************************
00:09:31.440   10:06:26 vfio_user_qemu -- vfio_user/vfio_user.sh@17 -- # run_test vfio_user_virtio_blk_restart_vm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh virtio_blk
00:09:31.440   10:06:26 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:31.440   10:06:26 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:31.440   10:06:26 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:09:31.440  ************************************
00:09:31.440  START TEST vfio_user_virtio_blk_restart_vm
00:09:31.440  ************************************
00:09:31.440   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh virtio_blk
00:09:31.440  * Looking for test storage...
00:09:31.440  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:31.440     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1693 -- # lcov --version
00:09:31.440     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@336 -- # IFS=.-:
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@336 -- # read -ra ver1
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@337 -- # IFS=.-:
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@337 -- # read -ra ver2
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@338 -- # local 'op=<'
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@340 -- # ver1_l=2
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@341 -- # ver2_l=1
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@344 -- # case "$op" in
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@345 -- # : 1
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:31.440     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@365 -- # decimal 1
00:09:31.440     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@353 -- # local d=1
00:09:31.440     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:31.440     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@355 -- # echo 1
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@365 -- # ver1[v]=1
00:09:31.440     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@366 -- # decimal 2
00:09:31.440     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@353 -- # local d=2
00:09:31.440     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:31.440     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@355 -- # echo 2
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@366 -- # ver2[v]=2
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@368 -- # return 0
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:31.440  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:31.440  		--rc genhtml_branch_coverage=1
00:09:31.440  		--rc genhtml_function_coverage=1
00:09:31.440  		--rc genhtml_legend=1
00:09:31.440  		--rc geninfo_all_blocks=1
00:09:31.440  		--rc geninfo_unexecuted_blocks=1
00:09:31.440  		
00:09:31.440  		'
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:31.440  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:31.440  		--rc genhtml_branch_coverage=1
00:09:31.440  		--rc genhtml_function_coverage=1
00:09:31.440  		--rc genhtml_legend=1
00:09:31.440  		--rc geninfo_all_blocks=1
00:09:31.440  		--rc geninfo_unexecuted_blocks=1
00:09:31.440  		
00:09:31.440  		'
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:31.440  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:31.440  		--rc genhtml_branch_coverage=1
00:09:31.440  		--rc genhtml_function_coverage=1
00:09:31.440  		--rc genhtml_legend=1
00:09:31.440  		--rc geninfo_all_blocks=1
00:09:31.440  		--rc geninfo_unexecuted_blocks=1
00:09:31.440  		
00:09:31.440  		'
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:31.440  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:31.440  		--rc genhtml_branch_coverage=1
00:09:31.440  		--rc genhtml_function_coverage=1
00:09:31.440  		--rc genhtml_legend=1
00:09:31.440  		--rc geninfo_all_blocks=1
00:09:31.440  		--rc geninfo_unexecuted_blocks=1
00:09:31.440  		
00:09:31.440  		'
00:09:31.440   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@6 -- # : 128
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@7 -- # : 512
00:09:31.440    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:09:31.440     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@6 -- # : false
00:09:31.440     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@7 -- # : /root/vhost_test
00:09:31.440     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:09:31.440     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@9 -- # : qemu-img
00:09:31.440      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:09:31.440     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:09:31.440     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:09:31.440     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:09:31.440     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:09:31.440     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:09:31.440     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:09:31.440       10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh
00:09:31.440      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:09:31.440     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:09:31.441     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:09:31.441     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:09:31.441     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:09:31.441     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:09:31.441     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@2 -- # vhost_0_main_core=0
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:09:31.441     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:09:31.441      10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:09:31.441       10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:09:31.441        10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@244 -- # check_cgroup
00:09:31.441        10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:09:31.441        10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:09:31.441        10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@10 -- # echo 2
00:09:31.441       10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:09:31.441    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:09:31.441    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:09:31.441    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:09:31.441   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/common.sh
00:09:31.441   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@12 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config
00:09:31.441    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]'
00:09:31.441    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0
00:09:31.441    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5
00:09:31.441    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:09:31.441    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7
00:09:31.441    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:09:31.441    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9
00:09:31.441    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:09:31.441   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@14 -- # bdfs=($(get_nvme_bdfs))
00:09:31.441    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@14 -- # get_nvme_bdfs
00:09:31.441    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1498 -- # bdfs=()
00:09:31.441    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1498 -- # local bdfs
00:09:31.441    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:09:31.441     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:09:31.441     10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:09:31.700    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:09:31.700    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:85:00.0
00:09:31.700    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@15 -- # get_vhost_dir 0
00:09:31.700    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:09:31.700    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:09:31.700    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:09:31.700   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@15 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:09:31.700   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@17 -- # virtio_type=virtio_blk
00:09:31.700   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@18 -- # [[ virtio_blk != virtio_blk ]]
00:09:31.700   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@31 -- # vhosttestinit
00:09:31.700   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:09:31.700   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:09:31.700   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:09:31.700   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:09:31.700   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@33 -- # vfu_tgt_run 0
00:09:31.700   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@6 -- # local vhost_name=0
00:09:31.700   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@7 -- # local vfio_user_dir vfu_pid_file rpc_py
00:09:31.700    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@9 -- # get_vhost_dir 0
00:09:31.700    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:09:31.700    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:09:31.700    10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:09:31.700   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@9 -- # vfio_user_dir=/root/vhost_test/vhost/0
00:09:31.700   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@10 -- # vfu_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:09:31.700   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@11 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:09:31.700   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@13 -- # mkdir -p /root/vhost_test/vhost/0
00:09:31.700   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@15 -- # timing_enter vfu_tgt_start
00:09:31.700   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:31.700   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:31.700   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@17 -- # vfupid=1769864
00:09:31.700   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512
00:09:31.700   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@18 -- # echo 1769864
00:09:31.700   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@20 -- # echo 'Process pid: 1769864'
00:09:31.700  Process pid: 1769864
00:09:31.700   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@21 -- # echo 'waiting for app to run...'
00:09:31.700  waiting for app to run...
00:09:31.700   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@22 -- # waitforlisten 1769864 /root/vhost_test/vhost/0/rpc.sock
00:09:31.700   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@835 -- # '[' -z 1769864 ']'
00:09:31.701   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:09:31.701   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:31.701   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:09:31.701  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:09:31.701   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:31.701   10:06:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:31.701  [2024-11-20 10:06:26.730554] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:09:31.701  [2024-11-20 10:06:26.730706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1769864 ]
00:09:31.701  EAL: No free 2048 kB hugepages reported on node 1
00:09:31.959  [2024-11-20 10:06:27.014007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:32.218  [2024-11-20 10:06:27.121126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:32.218  [2024-11-20 10:06:27.121204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:32.218  [2024-11-20 10:06:27.121282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:32.218  [2024-11-20 10:06:27.121305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:09:32.784   10:06:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:32.784   10:06:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@868 -- # return 0
00:09:32.784   10:06:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@24 -- # timing_exit vfu_tgt_start
00:09:32.784   10:06:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:32.784   10:06:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:33.042   10:06:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@35 -- # vfu_vm_dir=/root/vhost_test/vms/vfu_tgt
00:09:33.042   10:06:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@36 -- # rm -rf /root/vhost_test/vms/vfu_tgt
00:09:33.042   10:06:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@37 -- # mkdir -p /root/vhost_test/vms/vfu_tgt
00:09:33.042   10:06:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@39 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:85:00.0
00:09:36.325  Nvme0n1
00:09:36.325   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@42 -- # disk_no=1
00:09:36.325   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@43 -- # vm_num=1
00:09:36.325   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@44 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_tgt_set_base_path /root/vhost_test/vms/vfu_tgt
00:09:36.325   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@46 -- # [[ virtio_blk == \v\i\r\t\i\o\_\b\l\k ]]
00:09:36.325   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@47 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_virtio_create_blk_endpoint virtio.1 --bdev-name Nvme0n1 --num-queues=2 --qsize=512 --packed-ring
00:09:36.583   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@53 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:09:36.583   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:09:36.583   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:36.584  INFO: Creating new VM in /root/vhost_test/vms/1
00:09:36.584  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:09:36.584  INFO: TASK MASK: 6-7
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:09:36.584  INFO: NUMA NODE: 0
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user_virtio
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@766 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:09:36.584  INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@767 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@768 -- # [[ 1 == '' ]]
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:09:36.584  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@787 -- # cat
00:09:36.584    10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:09:36.584   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@835 -- # echo 101
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@54 -- # vm_run 1
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@856 -- # false
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@859 -- # shift 0
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@373 -- # return 1
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:09:36.843  INFO: running /root/vhost_test/vms/1/run.sh
00:09:36.843   10:06:31 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:09:36.843  Running VM in /root/vhost_test/vms/1
00:09:37.102  [2024-11-20 10:06:32.091535] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully
00:09:37.102  Waiting for QEMU pid file
00:09:38.476  === qemu.log ===
00:09:38.476  === qemu.log ===
00:09:38.476   10:06:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@55 -- # vm_wait_for_boot 60 1
00:09:38.476   10:06:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:09:38.476   10:06:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:09:38.476   10:06:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@281 -- # return 0
00:09:38.476   10:06:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:09:38.476   10:06:33 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:38.476  INFO: Waiting for VMs to boot
00:09:38.476  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:10:00.396  
00:10:00.396  INFO: VM1 ready
00:10:00.396  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:00.396  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:00.396  INFO: all VMs ready
00:10:00.396   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@973 -- # return 0
00:10:00.396   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@58 -- # fio_bin=--fio-bin=/usr/src/fio-static/fio
00:10:00.396   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@59 -- # fio_disks=
00:10:00.396   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@60 -- # qemu_mask_param=VM_1_qemu_mask
00:10:00.396   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@62 -- # host_name=VM-1-6-7
00:10:00.396   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@63 -- # vm_exec 1 'hostname VM-1-6-7'
00:10:00.396   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:00.396   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:00.396   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:00.396   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:00.396   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:10:00.396    10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:00.396    10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:00.396    10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:00.396    10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:00.396    10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:00.396    10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:00.396   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'hostname VM-1-6-7'
00:10:00.396  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:00.655   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@64 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 1
00:10:00.655   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@977 -- # local OPTIND optchar
00:10:00.656   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@978 -- # local readonly=
00:10:00.656   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@979 -- # local fio_bin=
00:10:00.656   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@980 -- # getopts :-: optchar
00:10:00.656   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@981 -- # case "$optchar" in
00:10:00.656   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@983 -- # case "$OPTARG" in
00:10:00.656   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:10:00.656   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@980 -- # getopts :-: optchar
00:10:00.656   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@993 -- # shift 1
00:10:00.656   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@994 -- # for vm_num in "$@"
00:10:00.656   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@995 -- # notice 'Starting fio server on VM1'
00:10:00.656   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM1'
00:10:00.656   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:00.656   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:00.656   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:00.656   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:00.656   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:00.656   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM1'
00:10:00.656  INFO: Starting fio server on VM1
00:10:00.656   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:10:00.656   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@997 -- # vm_exec 1 'cat > /root/fio; chmod +x /root/fio'
00:10:00.656   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:00.656   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:00.656   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:00.656   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:00.656   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:10:00.656    10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:00.656    10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:00.656    10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:00.656    10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:00.656    10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:00.656    10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:00.656   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:10:00.656  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:00.915   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@998 -- # vm_exec 1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:10:00.915   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:00.915   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:00.915   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:00.915   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:00.915   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:10:00.915    10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:00.915    10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:00.915    10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:00.915    10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:00.915    10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:00.915    10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:00.915   10:06:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:10:00.915  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:00.915   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@66 -- # disks_before_restart=
00:10:00.915   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@67 -- # get_disks virtio_blk 1
00:10:00.915   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@24 -- # [[ virtio_blk == \v\i\r\t\i\o\_\s\c\s\i ]]
00:10:00.915   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@26 -- # [[ virtio_blk == \v\i\r\t\i\o\_\b\l\k ]]
00:10:00.915   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@27 -- # vm_check_blk_location 1
00:10:00.915   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1035 -- # local 'script=shopt -s nullglob; cd /sys/block; echo vd*'
00:10:00.915    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # echo 'shopt -s nullglob; cd /sys/block; echo vd*'
00:10:00.915    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # vm_exec 1 bash -s
00:10:00.915    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:00.915    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:00.915    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:00.915    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:00.915    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:10:00.915     10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:00.915     10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:00.915     10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:00.915     10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:00.915     10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:00.915     10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:01.174    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 bash -s
00:10:01.174  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # SCSI_DISK=vda
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1038 -- # [[ -z vda ]]
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@68 -- # disks_before_restart=vda
00:10:01.174    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@70 -- # printf :/dev/%s vda
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@70 -- # fio_disks=' --vm=1:/dev/vda'
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@71 -- # job_file=default_integrity.job
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@74 -- # run_fio --fio-bin=/usr/src/fio-static/fio --job-file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job --out=/root/vhost_test/fio_results --vm=1:/dev/vda
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1053 -- # local arg
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1054 -- # local job_file=
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1055 -- # local fio_bin=
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1056 -- # vms=()
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1056 -- # local vms
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1057 -- # local out=
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1058 -- # local vm
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1059 -- # local run_server_mode=true
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1060 -- # local run_plugin_mode=false
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1061 -- # local fio_start_cmd
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1062 -- # local fio_output_format=normal
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1063 -- # local fio_gtod_reduce=false
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1064 -- # local wait_for_fio=true
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1069 -- # local fio_bin=/usr/src/fio-static/fio
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1068 -- # local job_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1072 -- # local out=/root/vhost_test/fio_results
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1073 -- # mkdir -p /root/vhost_test/fio_results
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1092 -- # [[ -n /usr/src/fio-static/fio ]]
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1092 -- # [[ ! -r /usr/src/fio-static/fio ]]
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1097 -- # [[ -z /usr/src/fio-static/fio ]]
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1101 -- # [[ ! -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job ]]
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1106 -- # fio_start_cmd='/usr/src/fio-static/fio --eta=never '
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1108 -- # local job_fname
00:10:01.174    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1109 -- # basename /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1109 -- # job_fname=default_integrity.job
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1110 -- # log_fname=default_integrity.log
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1111 -- # fio_start_cmd+=' --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal '
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1115 -- # local vm_num=1
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1116 -- # local vmdisks=/dev/vda
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/dev/vda@;s@description=\(.*\)@description=\1 (VM=1)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1119 -- # vm_exec 1 'cat > /root/default_integrity.job'
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:10:01.174    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:01.174    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:01.174    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:01.174    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:01.174    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:01.174    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:01.174   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/default_integrity.job'
00:10:01.174  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:01.433   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1121 -- # false
00:10:01.433   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1125 -- # vm_exec 1 cat /root/default_integrity.job
00:10:01.433   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:01.433   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:01.433   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:01.433   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:01.433   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:10:01.433    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:01.433    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:01.433    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:01.433    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:01.433    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:01.433    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:01.433   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 cat /root/default_integrity.job
00:10:01.433  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:01.433  [global]
00:10:01.433  blocksize_range=4k-512k
00:10:01.433  iodepth=512
00:10:01.433  iodepth_batch=128
00:10:01.433  iodepth_low=256
00:10:01.433  ioengine=libaio
00:10:01.433  size=1G
00:10:01.433  io_size=4G
00:10:01.433  filename=/dev/vda
00:10:01.433  group_reporting
00:10:01.433  thread
00:10:01.433  numjobs=1
00:10:01.433  direct=1
00:10:01.433  rw=randwrite
00:10:01.433  do_verify=1
00:10:01.433  verify=md5
00:10:01.433  verify_backlog=1024
00:10:01.433  fsync_on_close=1
00:10:01.433  verify_state_save=0
00:10:01.433  [nvme-host]
00:10:01.433   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1127 -- # true
00:10:01.433    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1128 -- # vm_fio_socket 1
00:10:01.433    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@326 -- # vm_num_is_valid 1
00:10:01.433    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:01.433    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:01.433    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/1
00:10:01.433    10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/1/fio_socket
00:10:01.433   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10101 --remote-config /root/default_integrity.job '
00:10:01.433   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1131 -- # true
00:10:01.433   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1147 -- # true
00:10:01.433   10:06:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1161 -- # /usr/src/fio-static/fio --eta=never --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal --client=127.0.0.1,10101 --remote-config /root/default_integrity.job
00:10:16.306   10:07:09 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1162 -- # sleep 1
00:10:16.306   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1164 -- # [[ normal == \j\s\o\n ]]
00:10:16.306   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1172 -- # [[ ! -n '' ]]
00:10:16.306   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1173 -- # cat /root/vhost_test/fio_results/default_integrity.log
00:10:16.306  hostname=VM-1-6-7, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:10:16.306  <VM-1-6-7> nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:10:16.306  <VM-1-6-7> Starting 1 thread
00:10:16.306  <VM-1-6-7> 
00:10:16.306  nvme-host: (groupid=0, jobs=1): err= 0: pid=940: Wed Nov 20 10:07:09 2024
00:10:16.306    read: IOPS=1019, BW=171MiB/s (179MB/s)(2048MiB/11974msec)
00:10:16.306      slat (usec): min=49, max=125047, avg=25488.55, stdev=31659.41
00:10:16.306      clat (msec): min=5, max=466, avg=202.05, stdev=76.78
00:10:16.306       lat (msec): min=5, max=467, avg=227.54, stdev=89.31
00:10:16.306      clat percentiles (msec):
00:10:16.306       |  1.00th=[   12],  5.00th=[   61], 10.00th=[   81], 20.00th=[  136],
00:10:16.306       | 30.00th=[  171], 40.00th=[  197], 50.00th=[  220], 60.00th=[  232],
00:10:16.306       | 70.00th=[  247], 80.00th=[  264], 90.00th=[  288], 95.00th=[  313],
00:10:16.306       | 99.00th=[  347], 99.50th=[  359], 99.90th=[  460], 99.95th=[  464],
00:10:16.306       | 99.99th=[  468]
00:10:16.306    write: IOPS=1097, BW=184MiB/s (193MB/s)(2048MiB/11125msec); 0 zone resets
00:10:16.306      slat (usec): min=260, max=74886, avg=22113.40, stdev=15040.88
00:10:16.306      clat (msec): min=5, max=330, avg=133.12, stdev=65.27
00:10:16.306       lat (msec): min=6, max=340, avg=155.23, stdev=67.64
00:10:16.306      clat percentiles (msec):
00:10:16.306       |  1.00th=[    7],  5.00th=[   31], 10.00th=[   54], 20.00th=[   79],
00:10:16.306       | 30.00th=[   96], 40.00th=[  112], 50.00th=[  125], 60.00th=[  140],
00:10:16.306       | 70.00th=[  161], 80.00th=[  190], 90.00th=[  226], 95.00th=[  249],
00:10:16.306       | 99.00th=[  296], 99.50th=[  309], 99.90th=[  317], 99.95th=[  321],
00:10:16.306       | 99.99th=[  330]
00:10:16.306     bw (  KiB/s): min= 8192, max=394168, per=100.00%, avg=209715.20, stdev=133142.95, samples=20
00:10:16.306     iops        : min=   48, max= 2048, avg=1220.80, stdev=819.88, samples=20
00:10:16.306    lat (msec)   : 10=0.79%, 20=1.41%, 50=4.13%, 100=17.41%, 250=60.13%
00:10:16.306    lat (msec)   : 500=16.13%
00:10:16.306    cpu          : usr=73.59%, sys=1.50%, ctx=1050, majf=0, minf=34
00:10:16.306    IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.5%, >=64=99.1%
00:10:16.306       submit    : 0=0.0%, 4=0.0%, 8=1.2%, 16=0.0%, 32=0.0%, 64=19.2%, >=64=79.6%
00:10:16.306       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:10:16.306       issued rwts: total=12208,12208,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:16.306       latency   : target=0, window=0, percentile=100.00%, depth=512
00:10:16.306  
00:10:16.306  Run status group 0 (all jobs):
00:10:16.306     READ: bw=171MiB/s (179MB/s), 171MiB/s-171MiB/s (179MB/s-179MB/s), io=2048MiB (2147MB), run=11974-11974msec
00:10:16.306    WRITE: bw=184MiB/s (193MB/s), 184MiB/s-184MiB/s (193MB/s-193MB/s), io=2048MiB (2147MB), run=11125-11125msec
00:10:16.306  
00:10:16.306  Disk stats (read/write):
00:10:16.306    vda: ios=12311/12141, merge=71/72, ticks=1034568/144988, in_queue=1179557, util=55.63%
00:10:16.306   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@77 -- # notice 'Shutting down virtual machine...'
00:10:16.306   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...'
00:10:16.306   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:16.306   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:16.306   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:16.306   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:16.306   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...'
00:10:16.307  INFO: Shutting down virtual machine...
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@78 -- # vm_shutdown_all
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:10:16.307    10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:10:16.307    10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@466 -- # vms=()
00:10:16.307    10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@466 -- # local vms
00:10:16.307    10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:10:16.307    10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:10:16.307    10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:10:16.307    10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=1770583
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 1770583
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:10:16.307  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@432 -- # set +e
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:10:16.307    10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:16.307    10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:16.307    10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:16.307    10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:16.307    10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:16.307    10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:16.307   10:07:10 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:10:16.307  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:10:16.307  INFO: VM1 is shutting down - wait a while to complete
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@435 -- # set -e
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:10:16.307  INFO: Waiting for VMs to shutdown...
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:10:16.307    10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=1770583
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 1770583
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:10:16.307   10:07:11 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:10:17.242   10:07:12 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:10:17.242   10:07:12 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:10:17.242   10:07:12 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:10:17.242   10:07:12 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:10:17.242   10:07:12 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:17.242   10:07:12 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:17.242   10:07:12 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:10:17.242   10:07:12 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:17.242   10:07:12 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:10:17.242    10:07:12 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:10:17.242   10:07:12 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=1770583
00:10:17.242   10:07:12 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 1770583
00:10:17.242   10:07:12 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:10:17.242   10:07:12 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:10:18.176   10:07:13 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:10:18.176   10:07:13 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:10:18.176   10:07:13 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:10:18.176   10:07:13 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:10:18.176   10:07:13 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:18.176   10:07:13 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:18.176   10:07:13 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:10:18.176   10:07:13 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:18.176   10:07:13 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@373 -- # return 1
00:10:18.176   10:07:13 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:10:18.176   10:07:13 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:10:19.110   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:10:19.110   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:10:19.110   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:10:19.110   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:10:19.110   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:19.110   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:19.110   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:19.110   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:19.110   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:19.110   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:10:19.110  INFO: All VMs successfully shut down
00:10:19.110   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@505 -- # return 0
00:10:19.110   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@81 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:10:19.110   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:10:19.110   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:10:19.110  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:10:19.110  INFO: Creating new VM in /root/vhost_test/vms/1
00:10:19.110  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:10:19.110  INFO: TASK MASK: 6-7
00:10:19.110   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:10:19.110   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:10:19.110   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:10:19.110   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:10:19.110   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:19.110   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:19.110   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:19.110   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:19.110   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:10:19.111  INFO: NUMA NODE: 0
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user_virtio
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@766 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:10:19.111  INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@767 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@768 -- # [[ 1 == '' ]]
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:10:19.111  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@787 -- # cat
00:10:19.111    10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@835 -- # echo 101
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@82 -- # vm_run 1
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@856 -- # false
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@859 -- # shift 0
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@373 -- # return 1
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:10:19.111  INFO: running /root/vhost_test/vms/1/run.sh
00:10:19.111   10:07:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:10:19.111  Running VM in /root/vhost_test/vms/1
00:10:19.682  [2024-11-20 10:07:14.550606] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully
00:10:19.682  Waiting for QEMU pid file
00:10:20.676  === qemu.log ===
00:10:20.676  === qemu.log ===
00:10:20.676   10:07:15 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@83 -- # vm_wait_for_boot 60 1
00:10:20.676   10:07:15 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:10:20.676   10:07:15 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:10:20.676   10:07:15 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@281 -- # return 0
00:10:20.676   10:07:15 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:10:20.676   10:07:15 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:10:20.676  INFO: Waiting for VMs to boot
00:10:20.676  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:10:42.596  
00:10:42.596  INFO: VM1 ready
00:10:42.596  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:42.596  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:43.163  INFO: all VMs ready
00:10:43.163   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@973 -- # return 0
00:10:43.163   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@86 -- # disks_after_restart=
00:10:43.163   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@87 -- # get_disks virtio_blk 1
00:10:43.163   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@24 -- # [[ virtio_blk == \v\i\r\t\i\o\_\s\c\s\i ]]
00:10:43.163   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@26 -- # [[ virtio_blk == \v\i\r\t\i\o\_\b\l\k ]]
00:10:43.163   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@27 -- # vm_check_blk_location 1
00:10:43.163   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1035 -- # local 'script=shopt -s nullglob; cd /sys/block; echo vd*'
00:10:43.163    10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # echo 'shopt -s nullglob; cd /sys/block; echo vd*'
00:10:43.163    10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # vm_exec 1 bash -s
00:10:43.163    10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:43.163    10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:43.163    10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:43.163    10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:43.163    10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:10:43.163     10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:43.163     10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:43.163     10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:43.163     10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:43.163     10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:43.163     10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:43.163    10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 bash -s
00:10:43.163  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # SCSI_DISK=vda
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1038 -- # [[ -z vda ]]
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@88 -- # disks_after_restart=vda
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@90 -- # [[ vda != \v\d\a ]]
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@96 -- # notice 'Shutting down virtual machine...'
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...'
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...'
00:10:43.421  INFO: Shutting down virtual machine...
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@97 -- # vm_shutdown_all
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:10:43.421    10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:10:43.421    10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@466 -- # vms=()
00:10:43.421    10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@466 -- # local vms
00:10:43.421    10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:10:43.421    10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:10:43.421    10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:10:43.421    10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=1775640
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 1775640
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:10:43.421  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@432 -- # set +e
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:10:43.421    10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:43.421    10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:43.421    10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:43.421    10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:43.421    10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:43.421    10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:10:43.421  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:10:43.421  INFO: VM1 is shutting down - wait a while to complete
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@435 -- # set -e
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:10:43.421  INFO: Waiting for VMs to shutdown...
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:10:43.421    10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=1775640
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 1775640
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:10:43.421   10:07:38 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:10:44.792   10:07:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:10:44.792   10:07:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:10:44.792   10:07:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:10:44.792   10:07:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:10:44.792   10:07:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:44.792   10:07:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:44.792   10:07:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:10:44.792   10:07:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:44.792   10:07:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:10:44.792    10:07:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:10:44.792   10:07:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=1775640
00:10:44.792   10:07:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 1775640
00:10:44.792   10:07:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:10:44.792   10:07:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:10:45.724   10:07:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:10:45.724   10:07:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:10:45.724   10:07:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:10:45.724   10:07:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:10:45.724   10:07:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:45.724   10:07:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:45.724   10:07:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:10:45.724   10:07:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:45.724   10:07:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@373 -- # return 1
00:10:45.724   10:07:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:10:45.724   10:07:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:10:46.656   10:07:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:10:46.656   10:07:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:10:46.656   10:07:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:10:46.657   10:07:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:10:46.657   10:07:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:46.657   10:07:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:46.657   10:07:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:46.657   10:07:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:46.657   10:07:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:46.657   10:07:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:10:46.657  INFO: All VMs successfully shut down
00:10:46.657   10:07:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@505 -- # return 0
00:10:46.657   10:07:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@99 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_detach_controller Nvme0
00:10:46.914  [2024-11-20 10:07:41.776183] vfu_virtio_blk.c: 384:bdev_event_cb: *NOTICE*: bdev name (Nvme0n1) received event(SPDK_BDEV_EVENT_REMOVE)
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@101 -- # vhost_kill 0
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@202 -- # local rc=0
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@203 -- # local vhost_name=0
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@210 -- # local vhost_dir
00:10:48.287    10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@211 -- # get_vhost_dir 0
00:10:48.287    10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:10:48.287    10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:10:48.287    10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@220 -- # local vhost_pid
00:10:48.287    10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@221 -- # vhost_pid=1769864
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@222 -- # notice 'killing vhost (PID 1769864) app'
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 1769864) app'
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 1769864) app'
00:10:48.287  INFO: killing vhost (PID 1769864) app
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@224 -- # kill -INT 1769864
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:10:48.287  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i = 0 ))
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@227 -- # kill -0 1769864
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@228 -- # echo .
00:10:48.287  .
00:10:48.287   10:07:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@229 -- # sleep 1
00:10:49.222   10:07:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i++ ))
00:10:49.222   10:07:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:10:49.222   10:07:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@227 -- # kill -0 1769864
00:10:49.222   10:07:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@228 -- # echo .
00:10:49.222  .
00:10:49.222   10:07:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@229 -- # sleep 1
00:10:50.156   10:07:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i++ ))
00:10:50.156   10:07:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:10:50.156   10:07:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@227 -- # kill -0 1769864
00:10:50.156  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (1769864) - No such process
00:10:50.156   10:07:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@231 -- # break
00:10:50.156   10:07:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@234 -- # kill -0 1769864
00:10:50.156  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (1769864) - No such process
00:10:50.156   10:07:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@239 -- # kill -0 1769864
00:10:50.156  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (1769864) - No such process
00:10:50.156   10:07:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@245 -- # is_pid_child 1769864
00:10:50.156   10:07:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1668 -- # local pid=1769864 _pid
00:10:50.156   10:07:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1670 -- # read -r _pid
00:10:50.156    10:07:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1667 -- # jobs -pr
00:10:50.156   10:07:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1671 -- # (( pid == _pid ))
00:10:50.156   10:07:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1670 -- # read -r _pid
00:10:50.156   10:07:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1674 -- # return 1
00:10:50.156   10:07:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:10:50.156   10:07:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:50.156   10:07:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:10:50.156   10:07:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:10:50.156   10:07:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@261 -- # return 0
00:10:50.156   10:07:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@103 -- # vhosttestfini
00:10:50.156   10:07:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:10:50.156  
00:10:50.156  real	1m18.828s
00:10:50.156  user	5m8.299s
00:10:50.156  sys	0m2.191s
00:10:50.156   10:07:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:50.156   10:07:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:10:50.156  ************************************
00:10:50.156  END TEST vfio_user_virtio_blk_restart_vm
00:10:50.156  ************************************
00:10:50.156   10:07:45 vfio_user_qemu -- vfio_user/vfio_user.sh@18 -- # run_test vfio_user_virtio_scsi_restart_vm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh virtio_scsi
00:10:50.156   10:07:45 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:50.156   10:07:45 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:50.156   10:07:45 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:10:50.156  ************************************
00:10:50.156  START TEST vfio_user_virtio_scsi_restart_vm
00:10:50.156  ************************************
00:10:50.156   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh virtio_scsi
00:10:50.415  * Looking for test storage...
00:10:50.415  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:10:50.415    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:10:50.415     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1693 -- # lcov --version
00:10:50.416     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@336 -- # IFS=.-:
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@336 -- # read -ra ver1
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@337 -- # IFS=.-:
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@337 -- # read -ra ver2
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@338 -- # local 'op=<'
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@340 -- # ver1_l=2
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@341 -- # ver2_l=1
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@344 -- # case "$op" in
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@345 -- # : 1
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:50.416     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@365 -- # decimal 1
00:10:50.416     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@353 -- # local d=1
00:10:50.416     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:50.416     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@355 -- # echo 1
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@365 -- # ver1[v]=1
00:10:50.416     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@366 -- # decimal 2
00:10:50.416     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@353 -- # local d=2
00:10:50.416     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:50.416     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@355 -- # echo 2
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@366 -- # ver2[v]=2
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@368 -- # return 0
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:10:50.416  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:50.416  		--rc genhtml_branch_coverage=1
00:10:50.416  		--rc genhtml_function_coverage=1
00:10:50.416  		--rc genhtml_legend=1
00:10:50.416  		--rc geninfo_all_blocks=1
00:10:50.416  		--rc geninfo_unexecuted_blocks=1
00:10:50.416  		
00:10:50.416  		'
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:10:50.416  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:50.416  		--rc genhtml_branch_coverage=1
00:10:50.416  		--rc genhtml_function_coverage=1
00:10:50.416  		--rc genhtml_legend=1
00:10:50.416  		--rc geninfo_all_blocks=1
00:10:50.416  		--rc geninfo_unexecuted_blocks=1
00:10:50.416  		
00:10:50.416  		'
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:50.416  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:50.416  		--rc genhtml_branch_coverage=1
00:10:50.416  		--rc genhtml_function_coverage=1
00:10:50.416  		--rc genhtml_legend=1
00:10:50.416  		--rc geninfo_all_blocks=1
00:10:50.416  		--rc geninfo_unexecuted_blocks=1
00:10:50.416  		
00:10:50.416  		'
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:10:50.416  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:50.416  		--rc genhtml_branch_coverage=1
00:10:50.416  		--rc genhtml_function_coverage=1
00:10:50.416  		--rc genhtml_legend=1
00:10:50.416  		--rc geninfo_all_blocks=1
00:10:50.416  		--rc geninfo_unexecuted_blocks=1
00:10:50.416  		
00:10:50.416  		'
00:10:50.416   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@6 -- # : 128
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@7 -- # : 512
00:10:50.416    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:10:50.416     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@6 -- # : false
00:10:50.416     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@7 -- # : /root/vhost_test
00:10:50.416     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:10:50.416     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@9 -- # : qemu-img
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:10:50.416     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:10:50.416     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:10:50.416     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:10:50.416     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:10:50.416     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:10:50.416     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:10:50.416       10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:10:50.416     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:10:50.416     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:10:50.416     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:10:50.416     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:10:50.416     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:10:50.416     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@2 -- # vhost_0_main_core=0
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:10:50.416      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:10:50.417     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:10:50.417      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:10:50.417      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:10:50.417      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:10:50.417      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:10:50.417      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:10:50.417      10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:10:50.417       10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:10:50.417        10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@244 -- # check_cgroup
00:10:50.417        10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:10:50.417        10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:10:50.417        10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@10 -- # echo 2
00:10:50.417       10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:10:50.417    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:10:50.417    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:10:50.417    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/common.sh
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@12 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config
00:10:50.417    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]'
00:10:50.417    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0
00:10:50.417    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5
00:10:50.417    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:10:50.417    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7
00:10:50.417    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:10:50.417    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9
00:10:50.417    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@14 -- # bdfs=($(get_nvme_bdfs))
00:10:50.417    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@14 -- # get_nvme_bdfs
00:10:50.417    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1498 -- # bdfs=()
00:10:50.417    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1498 -- # local bdfs
00:10:50.417    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:10:50.417     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:10:50.417     10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:10:50.417    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:10:50.417    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:85:00.0
00:10:50.417    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@15 -- # get_vhost_dir 0
00:10:50.417    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:10:50.417    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:10:50.417    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@15 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@17 -- # virtio_type=virtio_scsi
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@18 -- # [[ virtio_scsi != virtio_blk ]]
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@18 -- # [[ virtio_scsi != virtio_scsi ]]
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@31 -- # vhosttestinit
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@33 -- # vfu_tgt_run 0
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@6 -- # local vhost_name=0
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@7 -- # local vfio_user_dir vfu_pid_file rpc_py
00:10:50.417    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@9 -- # get_vhost_dir 0
00:10:50.417    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:10:50.417    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:10:50.417    10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@9 -- # vfio_user_dir=/root/vhost_test/vhost/0
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@10 -- # vfu_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@11 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@13 -- # mkdir -p /root/vhost_test/vhost/0
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@15 -- # timing_enter vfu_tgt_start
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@17 -- # vfupid=1779536
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@18 -- # echo 1779536
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@20 -- # echo 'Process pid: 1779536'
00:10:50.417  Process pid: 1779536
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@21 -- # echo 'waiting for app to run...'
00:10:50.417  waiting for app to run...
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@22 -- # waitforlisten 1779536 /root/vhost_test/vhost/0/rpc.sock
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@835 -- # '[' -z 1779536 ']'
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:10:50.417  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:50.417   10:07:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:10:50.676  [2024-11-20 10:07:45.615222] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:10:50.676  [2024-11-20 10:07:45.615373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1779536 ]
00:10:50.676  EAL: No free 2048 kB hugepages reported on node 1
00:10:50.934  [2024-11-20 10:07:45.903741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:50.934  [2024-11-20 10:07:46.010379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:50.934  [2024-11-20 10:07:46.010440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:50.934  [2024-11-20 10:07:46.010478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:50.934  [2024-11-20 10:07:46.010489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:51.868   10:07:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:51.868   10:07:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@868 -- # return 0
00:10:51.868   10:07:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@24 -- # timing_exit vfu_tgt_start
00:10:51.868   10:07:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:51.868   10:07:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:10:51.868   10:07:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@35 -- # vfu_vm_dir=/root/vhost_test/vms/vfu_tgt
00:10:51.868   10:07:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@36 -- # rm -rf /root/vhost_test/vms/vfu_tgt
00:10:51.868   10:07:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@37 -- # mkdir -p /root/vhost_test/vms/vfu_tgt
00:10:51.868   10:07:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@39 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:85:00.0
00:10:55.148  Nvme0n1
00:10:55.148   10:07:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@42 -- # disk_no=1
00:10:55.148   10:07:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@43 -- # vm_num=1
00:10:55.148   10:07:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@44 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_tgt_set_base_path /root/vhost_test/vms/vfu_tgt
00:10:55.406   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@46 -- # [[ virtio_scsi == \v\i\r\t\i\o\_\b\l\k ]]
00:10:55.406   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@48 -- # [[ virtio_scsi == \v\i\r\t\i\o\_\s\c\s\i ]]
00:10:55.406   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_virtio_create_scsi_endpoint virtio.1 --num-io-queues=2 --qsize=512 --packed-ring
00:10:55.663   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@50 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_virtio_scsi_add_target virtio.1 --scsi-target-num=0 --bdev-name Nvme0n1
00:10:55.923  [2024-11-20 10:07:50.911472] vfu_virtio_scsi.c: 886:vfu_virtio_scsi_add_target: *NOTICE*: virtio.1: added SCSI target 0 using bdev 'Nvme0n1'
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@53 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:10:55.923  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:10:55.923  INFO: Creating new VM in /root/vhost_test/vms/1
00:10:55.923  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:10:55.923  INFO: TASK MASK: 6-7
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:10:55.923  INFO: NUMA NODE: 0
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user_virtio
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@766 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:10:55.923  INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@767 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@768 -- # [[ 1 == '' ]]
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:10:55.923  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@787 -- # cat
00:10:55.923    10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:10:55.923   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@835 -- # echo 101
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@54 -- # vm_run 1
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@856 -- # false
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@859 -- # shift 0
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@373 -- # return 1
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:10:55.924  INFO: running /root/vhost_test/vms/1/run.sh
00:10:55.924   10:07:50 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:10:55.924  Running VM in /root/vhost_test/vms/1
00:10:56.182  [2024-11-20 10:07:51.222902] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully
00:10:56.182  Waiting for QEMU pid file
00:10:57.557  === qemu.log ===
00:10:57.557  === qemu.log ===
00:10:57.557   10:07:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@55 -- # vm_wait_for_boot 60 1
00:10:57.557   10:07:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:10:57.557   10:07:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:10:57.557   10:07:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@281 -- # return 0
00:10:57.557   10:07:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:10:57.557   10:07:52 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:10:57.557  INFO: Waiting for VMs to boot
00:10:57.557  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:11:09.752  [2024-11-20 10:08:04.467384] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:11:17.856  
00:11:17.856  INFO: VM1 ready
00:11:17.856  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:17.856  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:18.791  INFO: all VMs ready
00:11:18.791   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@973 -- # return 0
00:11:18.791   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@58 -- # fio_bin=--fio-bin=/usr/src/fio-static/fio
00:11:18.791   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@59 -- # fio_disks=
00:11:18.791   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@60 -- # qemu_mask_param=VM_1_qemu_mask
00:11:18.791   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@62 -- # host_name=VM-1-6-7
00:11:18.791   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@63 -- # vm_exec 1 'hostname VM-1-6-7'
00:11:18.791   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:11:18.791   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:18.791   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:18.791   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:11:18.791   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:11:18.791    10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:11:18.791    10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:11:18.791    10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:18.791    10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:18.791    10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:11:18.791    10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:11:18.791   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'hostname VM-1-6-7'
00:11:18.791  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:18.791   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@64 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 1
00:11:18.791   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@977 -- # local OPTIND optchar
00:11:18.791   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@978 -- # local readonly=
00:11:18.791   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@979 -- # local fio_bin=
00:11:18.791   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@980 -- # getopts :-: optchar
00:11:18.791   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@981 -- # case "$optchar" in
00:11:18.791   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@983 -- # case "$OPTARG" in
00:11:18.791   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:11:18.792   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@980 -- # getopts :-: optchar
00:11:18.792   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@993 -- # shift 1
00:11:18.792   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@994 -- # for vm_num in "$@"
00:11:18.792   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@995 -- # notice 'Starting fio server on VM1'
00:11:18.792   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM1'
00:11:18.792   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:18.792   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:11:18.792   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:18.792   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:18.792   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:11:18.792   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM1'
00:11:18.792  INFO: Starting fio server on VM1
00:11:18.792   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:11:18.792   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@997 -- # vm_exec 1 'cat > /root/fio; chmod +x /root/fio'
00:11:18.792   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:11:18.792   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:18.792   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:18.792   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:11:18.792   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:11:18.792    10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:11:18.792    10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:11:18.792    10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:18.792    10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:18.792    10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:11:18.792    10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:11:18.792   10:08:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:11:18.792  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:19.050   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@998 -- # vm_exec 1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:11:19.050   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:11:19.050   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:19.050   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:19.050   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:11:19.050   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:11:19.050    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:11:19.050    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:11:19.050    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:19.050    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:19.050    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:11:19.050    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:11:19.050   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:11:19.050  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:19.309   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@66 -- # disks_before_restart=
00:11:19.309   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@67 -- # get_disks virtio_scsi 1
00:11:19.309   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@24 -- # [[ virtio_scsi == \v\i\r\t\i\o\_\s\c\s\i ]]
00:11:19.309   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@25 -- # vm_check_scsi_location 1
00:11:19.309   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1014 -- # local 'script=shopt -s nullglob;
00:11:19.309  	for entry in /sys/block/sd*; do
00:11:19.309  		disk_type="$(cat $entry/device/vendor)";
00:11:19.309  		if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then
00:11:19.309  			fname=$(basename $entry);
00:11:19.309  			echo -n " $fname";
00:11:19.309  		fi;
00:11:19.309  	done'
00:11:19.309    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # echo 'shopt -s nullglob;
00:11:19.309  	for entry in /sys/block/sd*; do
00:11:19.309  		disk_type="$(cat $entry/device/vendor)";
00:11:19.309  		if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then
00:11:19.309  			fname=$(basename $entry);
00:11:19.309  			echo -n " $fname";
00:11:19.309  		fi;
00:11:19.309  	done'
00:11:19.309    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # vm_exec 1 bash -s
00:11:19.309    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:11:19.309    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:19.310    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:19.310    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:11:19.310    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:11:19.310     10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:11:19.310     10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:11:19.310     10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:19.310     10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:19.310     10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:11:19.310     10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:11:19.310    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 bash -s
00:11:19.310  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # SCSI_DISK=' sdb'
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1018 -- # [[ -z  sdb ]]
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@68 -- # disks_before_restart=' sdb'
00:11:19.310    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@70 -- # printf :/dev/%s sdb
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@70 -- # fio_disks=' --vm=1:/dev/sdb'
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@71 -- # job_file=default_integrity.job
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@74 -- # run_fio --fio-bin=/usr/src/fio-static/fio --job-file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job --out=/root/vhost_test/fio_results --vm=1:/dev/sdb
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1053 -- # local arg
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1054 -- # local job_file=
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1055 -- # local fio_bin=
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1056 -- # vms=()
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1056 -- # local vms
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1057 -- # local out=
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1058 -- # local vm
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1059 -- # local run_server_mode=true
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1060 -- # local run_plugin_mode=false
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1061 -- # local fio_start_cmd
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1062 -- # local fio_output_format=normal
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1063 -- # local fio_gtod_reduce=false
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1064 -- # local wait_for_fio=true
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1069 -- # local fio_bin=/usr/src/fio-static/fio
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1068 -- # local job_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1072 -- # local out=/root/vhost_test/fio_results
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1073 -- # mkdir -p /root/vhost_test/fio_results
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1092 -- # [[ -n /usr/src/fio-static/fio ]]
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1092 -- # [[ ! -r /usr/src/fio-static/fio ]]
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1097 -- # [[ -z /usr/src/fio-static/fio ]]
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1101 -- # [[ ! -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job ]]
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1106 -- # fio_start_cmd='/usr/src/fio-static/fio --eta=never '
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1108 -- # local job_fname
00:11:19.310    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1109 -- # basename /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1109 -- # job_fname=default_integrity.job
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1110 -- # log_fname=default_integrity.log
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1111 -- # fio_start_cmd+=' --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal '
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1115 -- # local vm_num=1
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1116 -- # local vmdisks=/dev/sdb
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/dev/sdb@;s@description=\(.*\)@description=\1 (VM=1)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1119 -- # vm_exec 1 'cat > /root/default_integrity.job'
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:11:19.310    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:11:19.310    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:11:19.310    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:19.310    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:19.310    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:11:19.310    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:11:19.310   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/default_integrity.job'
00:11:19.310  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:19.569   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1121 -- # false
00:11:19.569   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1125 -- # vm_exec 1 cat /root/default_integrity.job
00:11:19.569   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:11:19.569   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:19.569   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:19.569   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:11:19.569   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:11:19.569    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:11:19.569    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:11:19.569    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:19.569    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:19.569    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:11:19.569    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:11:19.569   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 cat /root/default_integrity.job
00:11:19.569  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:19.827  [global]
00:11:19.827  blocksize_range=4k-512k
00:11:19.827  iodepth=512
00:11:19.827  iodepth_batch=128
00:11:19.827  iodepth_low=256
00:11:19.827  ioengine=libaio
00:11:19.827  size=1G
00:11:19.827  io_size=4G
00:11:19.827  filename=/dev/sdb
00:11:19.827  group_reporting
00:11:19.827  thread
00:11:19.827  numjobs=1
00:11:19.827  direct=1
00:11:19.827  rw=randwrite
00:11:19.827  do_verify=1
00:11:19.827  verify=md5
00:11:19.827  verify_backlog=1024
00:11:19.827  fsync_on_close=1
00:11:19.827  verify_state_save=0
00:11:19.827  [nvme-host]
00:11:19.827   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1127 -- # true
00:11:19.827    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1128 -- # vm_fio_socket 1
00:11:19.827    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@326 -- # vm_num_is_valid 1
00:11:19.827    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:19.827    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:19.827    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/1
00:11:19.827    10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/1/fio_socket
00:11:19.827   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10101 --remote-config /root/default_integrity.job '
00:11:19.827   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1131 -- # true
00:11:19.827   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1147 -- # true
00:11:19.827   10:08:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1161 -- # /usr/src/fio-static/fio --eta=never --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal --client=127.0.0.1,10101 --remote-config /root/default_integrity.job
00:11:20.760  [2024-11-20 10:08:15.773701] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:11:27.385  [2024-11-20 10:08:21.642944] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:11:27.385  [2024-11-20 10:08:21.660650] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:11:27.385  [2024-11-20 10:08:22.017916] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:11:32.653  [2024-11-20 10:08:27.601356] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:11:32.653  [2024-11-20 10:08:27.717317] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:11:32.912  [2024-11-20 10:08:28.023139] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:11:33.170   10:08:28 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1162 -- # sleep 1
00:11:34.103   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1164 -- # [[ normal == \j\s\o\n ]]
00:11:34.103   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1172 -- # [[ ! -n '' ]]
00:11:34.103   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1173 -- # cat /root/vhost_test/fio_results/default_integrity.log
00:11:34.103  hostname=VM-1-6-7, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:11:34.103  <VM-1-6-7> nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:11:34.103  <VM-1-6-7> Starting 1 thread
00:11:34.103  <VM-1-6-7> 
00:11:34.103  nvme-host: (groupid=0, jobs=1): err= 0: pid=952: Wed Nov 20 10:08:28 2024
00:11:34.103    read: IOPS=1018, BW=171MiB/s (179MB/s)(2048MiB/11984msec)
00:11:34.103      slat (usec): min=64, max=236306, avg=25203.95, stdev=52286.29
00:11:34.103      clat (msec): min=6, max=473, avg=198.89, stdev=87.28
00:11:34.103       lat (msec): min=7, max=519, avg=224.09, stdev=96.16
00:11:34.103      clat percentiles (msec):
00:11:34.103       |  1.00th=[   11],  5.00th=[   56], 10.00th=[   87], 20.00th=[  123],
00:11:34.103       | 30.00th=[  153], 40.00th=[  176], 50.00th=[  194], 60.00th=[  218],
00:11:34.103       | 70.00th=[  243], 80.00th=[  284], 90.00th=[  321], 95.00th=[  342],
00:11:34.103       | 99.00th=[  376], 99.50th=[  380], 99.90th=[  468], 99.95th=[  472],
00:11:34.103       | 99.99th=[  472]
00:11:34.103    write: IOPS=1092, BW=183MiB/s (192MB/s)(2048MiB/11175msec); 0 zone resets
00:11:34.103      slat (usec): min=411, max=87267, avg=22546.07, stdev=15313.60
00:11:34.103      clat (msec): min=7, max=347, avg=134.88, stdev=67.29
00:11:34.103       lat (msec): min=8, max=353, avg=157.42, stdev=71.11
00:11:34.103      clat percentiles (msec):
00:11:34.103       |  1.00th=[    9],  5.00th=[   32], 10.00th=[   48], 20.00th=[   81],
00:11:34.103       | 30.00th=[   95], 40.00th=[  113], 50.00th=[  129], 60.00th=[  148],
00:11:34.103       | 70.00th=[  167], 80.00th=[  197], 90.00th=[  226], 95.00th=[  255],
00:11:34.103       | 99.00th=[  284], 99.50th=[  321], 99.90th=[  338], 99.95th=[  347],
00:11:34.103       | 99.99th=[  347]
00:11:34.103     bw (  KiB/s): min= 4232, max=361688, per=98.85%, avg=185502.00, stdev=93140.13, samples=22
00:11:34.103     iops        : min=   34, max= 2014, avg=1066.77, stdev=528.97, samples=22
00:11:34.103    lat (msec)   : 10=0.95%, 20=1.91%, 50=4.94%, 100=15.38%, 250=60.39%
00:11:34.103    lat (msec)   : 500=16.42%
00:11:34.103    cpu          : usr=72.58%, sys=2.24%, ctx=1620, majf=0, minf=34
00:11:34.103    IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.5%, >=64=99.1%
00:11:34.103       submit    : 0=0.0%, 4=0.0%, 8=1.2%, 16=0.0%, 32=0.0%, 64=19.2%, >=64=79.6%
00:11:34.103       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:34.103       issued rwts: total=12208,12208,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:34.103       latency   : target=0, window=0, percentile=100.00%, depth=512
00:11:34.103  
00:11:34.103  Run status group 0 (all jobs):
00:11:34.103     READ: bw=171MiB/s (179MB/s), 171MiB/s-171MiB/s (179MB/s-179MB/s), io=2048MiB (2147MB), run=11984-11984msec
00:11:34.103    WRITE: bw=183MiB/s (192MB/s), 183MiB/s-183MiB/s (192MB/s-192MB/s), io=2048MiB (2147MB), run=11175-11175msec
00:11:34.103  
00:11:34.104  Disk stats (read/write):
00:11:34.104    sdb: ios=11872/12122, merge=54/86, ticks=756910/131641, in_queue=888552, util=54.67%
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@77 -- # notice 'Shutting down virtual machine...'
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...'
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...'
00:11:34.104  INFO: Shutting down virtual machine...
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@78 -- # vm_shutdown_all
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:11:34.104    10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:11:34.104    10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@466 -- # vms=()
00:11:34.104    10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@466 -- # local vms
00:11:34.104    10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:11:34.104    10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:11:34.104    10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:11:34.104    10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=1780260
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 1780260
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:11:34.104  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@432 -- # set +e
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:11:34.104    10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:11:34.104    10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:11:34.104    10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:34.104    10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:34.104    10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:11:34.104    10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:11:34.104   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:11:34.104  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:11:34.362  INFO: VM1 is shutting down - wait a while to complete
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@435 -- # set -e
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:11:34.362  INFO: Waiting for VMs to shutdown...
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:11:34.362    10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=1780260
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 1780260
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:11:34.362   10:08:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:11:35.297   10:08:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:11:35.297   10:08:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:11:35.297   10:08:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:11:35.297   10:08:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:11:35.297   10:08:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:35.297   10:08:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:35.297   10:08:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:11:35.297   10:08:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:11:35.297   10:08:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:11:35.297    10:08:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:11:35.297   10:08:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=1780260
00:11:35.297   10:08:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 1780260
00:11:35.297   10:08:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:11:35.297   10:08:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:11:36.230   10:08:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:11:36.230   10:08:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:11:36.230   10:08:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:11:36.230   10:08:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:11:36.230   10:08:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:36.230   10:08:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:36.230   10:08:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:11:36.230   10:08:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:11:36.230   10:08:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@373 -- # return 1
00:11:36.230   10:08:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:11:36.230   10:08:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:11:37.163   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:11:37.163   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:11:37.163   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:11:37.422   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:11:37.422   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:37.422   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:11:37.422   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:37.422   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:37.422   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:11:37.422   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:11:37.422  INFO: All VMs successfully shut down
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@505 -- # return 0
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@81 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:11:37.423  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:11:37.423  INFO: Creating new VM in /root/vhost_test/vms/1
00:11:37.423  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:11:37.423  INFO: TASK MASK: 6-7
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:11:37.423  INFO: NUMA NODE: 0
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user_virtio
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@766 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:11:37.423  INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@767 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@768 -- # [[ 1 == '' ]]
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:11:37.423  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@787 -- # cat
00:11:37.423    10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@835 -- # echo 101
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@82 -- # vm_run 1
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@856 -- # false
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@859 -- # shift 0
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@373 -- # return 1
00:11:37.423   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:11:37.424   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:11:37.424   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:37.424   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:11:37.424   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:37.424   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:37.424   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:11:37.424   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:11:37.424  INFO: running /root/vhost_test/vms/1/run.sh
00:11:37.424   10:08:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:11:37.424  Running VM in /root/vhost_test/vms/1
00:11:37.682  [2024-11-20 10:08:32.759198] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully
00:11:37.940  Waiting for QEMU pid file
00:11:38.873  === qemu.log ===
00:11:38.873  === qemu.log ===
00:11:38.873   10:08:33 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@83 -- # vm_wait_for_boot 60 1
00:11:38.873   10:08:33 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:11:38.873   10:08:33 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:11:38.873   10:08:33 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@281 -- # return 0
00:11:38.873   10:08:33 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:11:38.873   10:08:33 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:11:38.873  INFO: Waiting for VMs to boot
00:11:38.873  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:11:51.071  [2024-11-20 10:08:45.961280] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:01.033  
00:12:01.033  INFO: VM1 ready
00:12:01.033  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:01.033  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:01.291  INFO: all VMs ready
00:12:01.291   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@973 -- # return 0
00:12:01.291   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@86 -- # disks_after_restart=
00:12:01.291   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@87 -- # get_disks virtio_scsi 1
00:12:01.291   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@24 -- # [[ virtio_scsi == \v\i\r\t\i\o\_\s\c\s\i ]]
00:12:01.291   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@25 -- # vm_check_scsi_location 1
00:12:01.291   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1014 -- # local 'script=shopt -s nullglob;
00:12:01.291  	for entry in /sys/block/sd*; do
00:12:01.291  		disk_type="$(cat $entry/device/vendor)";
00:12:01.291  		if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then
00:12:01.291  			fname=$(basename $entry);
00:12:01.291  			echo -n " $fname";
00:12:01.291  		fi;
00:12:01.291  	done'
00:12:01.291    10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # echo 'shopt -s nullglob;
00:12:01.291  	for entry in /sys/block/sd*; do
00:12:01.291  		disk_type="$(cat $entry/device/vendor)";
00:12:01.291  		if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then
00:12:01.291  			fname=$(basename $entry);
00:12:01.291  			echo -n " $fname";
00:12:01.291  		fi;
00:12:01.291  	done'
00:12:01.291    10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # vm_exec 1 bash -s
00:12:01.291    10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:12:01.291    10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:01.291    10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:01.291    10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:12:01.291    10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:12:01.291     10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:12:01.291     10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:12:01.291     10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:01.291     10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:01.291     10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:12:01.291     10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:12:01.291    10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 bash -s
00:12:01.291  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:01.291   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # SCSI_DISK=' sdb'
00:12:01.291   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1018 -- # [[ -z  sdb ]]
00:12:01.291   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@88 -- # disks_after_restart=' sdb'
00:12:01.291   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@90 -- # [[  sdb != \ \s\d\b ]]
00:12:01.291   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@96 -- # notice 'Shutting down virtual machine...'
00:12:01.291   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...'
00:12:01.291   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:01.291   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:01.291   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:01.291   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:01.291   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:01.291   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...'
00:12:01.291  INFO: Shutting down virtual machine...
00:12:01.291   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@97 -- # vm_shutdown_all
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:12:01.292    10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:12:01.292    10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@466 -- # vms=()
00:12:01.292    10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@466 -- # local vms
00:12:01.292    10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:12:01.292    10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:12:01.292    10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:12:01.292    10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=1785183
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 1785183
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:12:01.292  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@432 -- # set +e
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:12:01.292    10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:12:01.292    10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:12:01.292    10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:01.292    10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:01.292    10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:12:01.292    10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:12:01.292   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:12:01.292  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:01.550   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:12:01.550   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:12:01.550   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:01.550   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:01.550   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:01.550   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:01.550   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:01.550   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:12:01.550  INFO: VM1 is shutting down - wait a while to complete
00:12:01.550   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@435 -- # set -e
00:12:01.550   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:12:01.550   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:12:01.550   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:01.550   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:01.550   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:01.550   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:01.550   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:01.550   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:12:01.550  INFO: Waiting for VMs to shutdown...
00:12:01.550   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:12:01.551   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:01.551   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:12:01.551   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:01.551   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:01.551   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:01.551   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:01.551   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:01.551   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:12:01.551    10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:12:01.551   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=1785183
00:12:01.551   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 1785183
00:12:01.551   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:12:01.551   10:08:56 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:12:02.483   10:08:57 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:12:02.483   10:08:57 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:02.483   10:08:57 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:12:02.483   10:08:57 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:02.483   10:08:57 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:02.483   10:08:57 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:02.483   10:08:57 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:02.483   10:08:57 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:02.483   10:08:57 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:12:02.483    10:08:57 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:12:02.483   10:08:57 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=1785183
00:12:02.483   10:08:57 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 1785183
00:12:02.483   10:08:57 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:12:02.483   10:08:57 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:12:03.853   10:08:58 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:12:03.853   10:08:58 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:03.853   10:08:58 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:12:03.853   10:08:58 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:03.853   10:08:58 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:03.853   10:08:58 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:03.853   10:08:58 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:03.853   10:08:58 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:03.853   10:08:58 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@373 -- # return 1
00:12:03.853   10:08:58 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:12:03.854   10:08:58 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:12:04.785   10:08:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:12:04.785   10:08:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:12:04.785   10:08:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:12:04.785   10:08:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:12:04.785   10:08:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:04.785   10:08:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:04.785   10:08:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:04.785   10:08:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:04.785   10:08:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:04.785   10:08:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:12:04.785  INFO: All VMs successfully shut down
00:12:04.785   10:08:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@505 -- # return 0
00:12:04.785   10:08:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@99 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_detach_controller Nvme0
00:12:04.786  [2024-11-20 10:08:59.810752] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Nvme0n1) received event(SPDK_BDEV_EVENT_REMOVE)
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@101 -- # vhost_kill 0
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@202 -- # local rc=0
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@203 -- # local vhost_name=0
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@210 -- # local vhost_dir
00:12:06.156    10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@211 -- # get_vhost_dir 0
00:12:06.156    10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:12:06.156    10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:12:06.156    10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@220 -- # local vhost_pid
00:12:06.156    10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@221 -- # vhost_pid=1779536
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@222 -- # notice 'killing vhost (PID 1779536) app'
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 1779536) app'
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 1779536) app'
00:12:06.156  INFO: killing vhost (PID 1779536) app
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@224 -- # kill -INT 1779536
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:12:06.156  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i = 0 ))
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@227 -- # kill -0 1779536
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@228 -- # echo .
00:12:06.156  .
00:12:06.156   10:09:01 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@229 -- # sleep 1
00:12:07.092   10:09:02 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i++ ))
00:12:07.092   10:09:02 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:12:07.092   10:09:02 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@227 -- # kill -0 1779536
00:12:07.092   10:09:02 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@228 -- # echo .
00:12:07.092  .
00:12:07.092   10:09:02 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@229 -- # sleep 1
00:12:08.469   10:09:03 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i++ ))
00:12:08.469   10:09:03 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:12:08.469   10:09:03 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@227 -- # kill -0 1779536
00:12:08.469  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (1779536) - No such process
00:12:08.469   10:09:03 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@231 -- # break
00:12:08.469   10:09:03 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@234 -- # kill -0 1779536
00:12:08.469  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (1779536) - No such process
00:12:08.469   10:09:03 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@239 -- # kill -0 1779536
00:12:08.469  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (1779536) - No such process
00:12:08.469   10:09:03 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@245 -- # is_pid_child 1779536
00:12:08.469   10:09:03 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1668 -- # local pid=1779536 _pid
00:12:08.469   10:09:03 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1670 -- # read -r _pid
00:12:08.469    10:09:03 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1667 -- # jobs -pr
00:12:08.469   10:09:03 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1671 -- # (( pid == _pid ))
00:12:08.469   10:09:03 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1670 -- # read -r _pid
00:12:08.469   10:09:03 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1674 -- # return 1
00:12:08.469   10:09:03 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:12:08.469   10:09:03 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:08.469   10:09:03 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:12:08.469   10:09:03 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:12:08.469   10:09:03 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@261 -- # return 0
00:12:08.469   10:09:03 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@103 -- # vhosttestfini
00:12:08.469   10:09:03 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:12:08.469  
00:12:08.469  real	1m17.972s
00:12:08.469  user	5m5.242s
00:12:08.469  sys	0m2.249s
00:12:08.469   10:09:03 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:08.469   10:09:03 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:12:08.469  ************************************
00:12:08.469  END TEST vfio_user_virtio_scsi_restart_vm
00:12:08.469  ************************************
00:12:08.469   10:09:03 vfio_user_qemu -- vfio_user/vfio_user.sh@19 -- # run_test vfio_user_virtio_bdevperf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/initiator_bdevperf.sh
00:12:08.469   10:09:03 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:08.469   10:09:03 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:08.469   10:09:03 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:12:08.469  ************************************
00:12:08.469  START TEST vfio_user_virtio_bdevperf
00:12:08.469  ************************************
00:12:08.469   10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/initiator_bdevperf.sh
00:12:08.469  * Looking for test storage...
00:12:08.469  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:12:08.469    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:12:08.469     10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version
00:12:08.469     10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:12:08.469    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:12:08.469    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:08.469    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:08.469    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:08.469    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:12:08.469    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:12:08.469    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:12:08.469    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:12:08.469    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:12:08.469    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:12:08.469    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:12:08.469    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:08.469    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:12:08.469    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@345 -- # : 1
00:12:08.469    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:08.469    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:08.469     10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:12:08.469     10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@353 -- # local d=1
00:12:08.469     10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:08.469     10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@355 -- # echo 1
00:12:08.469    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:12:08.469     10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:12:08.469     10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@353 -- # local d=2
00:12:08.469     10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:08.469     10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@355 -- # echo 2
00:12:08.469    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:12:08.469    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:08.469    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:08.469    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@368 -- # return 0
00:12:08.469    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:08.469    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:12:08.469  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:08.469  		--rc genhtml_branch_coverage=1
00:12:08.469  		--rc genhtml_function_coverage=1
00:12:08.469  		--rc genhtml_legend=1
00:12:08.469  		--rc geninfo_all_blocks=1
00:12:08.469  		--rc geninfo_unexecuted_blocks=1
00:12:08.469  		
00:12:08.469  		'
00:12:08.469    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:12:08.469  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:08.469  		--rc genhtml_branch_coverage=1
00:12:08.469  		--rc genhtml_function_coverage=1
00:12:08.469  		--rc genhtml_legend=1
00:12:08.469  		--rc geninfo_all_blocks=1
00:12:08.469  		--rc geninfo_unexecuted_blocks=1
00:12:08.469  		
00:12:08.470  		'
00:12:08.470    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:12:08.470  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:08.470  		--rc genhtml_branch_coverage=1
00:12:08.470  		--rc genhtml_function_coverage=1
00:12:08.470  		--rc genhtml_legend=1
00:12:08.470  		--rc geninfo_all_blocks=1
00:12:08.470  		--rc geninfo_unexecuted_blocks=1
00:12:08.470  		
00:12:08.470  		'
00:12:08.470    10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:12:08.470  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:08.470  		--rc genhtml_branch_coverage=1
00:12:08.470  		--rc genhtml_function_coverage=1
00:12:08.470  		--rc genhtml_legend=1
00:12:08.470  		--rc geninfo_all_blocks=1
00:12:08.470  		--rc geninfo_unexecuted_blocks=1
00:12:08.470  		
00:12:08.470  		'
00:12:08.470   10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@9 -- # rpc_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:12:08.470   10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@11 -- # vfu_dir=/tmp/vfu_devices
00:12:08.470   10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@12 -- # rm -rf /tmp/vfu_devices
00:12:08.470   10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@13 -- # mkdir -p /tmp/vfu_devices
00:12:08.470   10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@17 -- # spdk_tgt_pid=1789009
00:12:08.470   10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0xf -L vfu_virtio
00:12:08.470   10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@18 -- # waitforlisten 1789009
00:12:08.470   10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1789009 ']'
00:12:08.470   10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:08.470   10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:08.470   10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:08.470  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:08.470   10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:08.470   10:09:03 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:12:08.470  [2024-11-20 10:09:03.544139] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:12:08.470  [2024-11-20 10:09:03.544279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1789009 ]
00:12:08.728  EAL: No free 2048 kB hugepages reported on node 1
00:12:08.728  [2024-11-20 10:09:03.678225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:08.728  [2024-11-20 10:09:03.799690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:08.728  [2024-11-20 10:09:03.799730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:08.728  [2024-11-20 10:09:03.799777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:12:08.728  [2024-11-20 10:09:03.799771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:09.661   10:09:04 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:09.661   10:09:04 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:12:09.661   10:09:04 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create -b malloc0 64 512
00:12:10.227  malloc0
00:12:10.227   10:09:05 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create -b malloc1 64 512
00:12:10.484  malloc1
00:12:10.484   10:09:05 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@22 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create -b malloc2 64 512
00:12:10.742  malloc2
00:12:10.742   10:09:05 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@24 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_tgt_set_base_path /tmp/vfu_devices
00:12:11.000   10:09:06 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_create_blk_endpoint vfu.blk --bdev-name malloc0 --cpumask=0x1 --num-queues=2 --qsize=256 --packed-ring
00:12:11.257  [2024-11-20 10:09:06.292762] vfu_virtio.c:1533:vfu_virtio_endpoint_setup: *DEBUG*: mmap file /tmp/vfu_devices/vfu.blk_bar4, devmem_fd 466
00:12:11.257  [2024-11-20 10:09:06.292824] vfu_virtio.c:1695:vfu_virtio_get_device_info: *DEBUG*: /tmp/vfu_devices/vfu.blk: get device information, fd 466
00:12:11.257  [2024-11-20 10:09:06.293016] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.blk: get vendor capability, idx 0
00:12:11.257  [2024-11-20 10:09:06.293057] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.blk: get vendor capability, idx 1
00:12:11.257  [2024-11-20 10:09:06.293074] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.blk: get vendor capability, idx 2
00:12:11.257  [2024-11-20 10:09:06.293090] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.blk: get vendor capability, idx 3
00:12:11.257   10:09:06 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_create_scsi_endpoint vfu.scsi --cpumask 0x2 --num-io-queues=2 --qsize=256 --packed-ring
00:12:11.514  [2024-11-20 10:09:06.569889] vfu_virtio.c:1533:vfu_virtio_endpoint_setup: *DEBUG*: mmap file /tmp/vfu_devices/vfu.scsi_bar4, devmem_fd 570
00:12:11.514  [2024-11-20 10:09:06.569936] vfu_virtio.c:1695:vfu_virtio_get_device_info: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get device information, fd 570
00:12:11.514  [2024-11-20 10:09:06.570022] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get vendor capability, idx 0
00:12:11.515  [2024-11-20 10:09:06.570048] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get vendor capability, idx 1
00:12:11.515  [2024-11-20 10:09:06.570062] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get vendor capability, idx 2
00:12:11.515  [2024-11-20 10:09:06.570080] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get vendor capability, idx 3
00:12:11.515   10:09:06 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@33 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_scsi_add_target vfu.scsi --scsi-target-num=0 --bdev-name malloc1
00:12:11.772  [2024-11-20 10:09:06.859139] vfu_virtio_scsi.c: 886:vfu_virtio_scsi_add_target: *NOTICE*: vfu.scsi: added SCSI target 0 using bdev 'malloc1'
00:12:11.772   10:09:06 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_scsi_add_target vfu.scsi --scsi-target-num=1 --bdev-name malloc2
00:12:12.028  [2024-11-20 10:09:07.136295] vfu_virtio_scsi.c: 886:vfu_virtio_scsi_add_target: *NOTICE*: vfu.scsi: added SCSI target 1 using bdev 'malloc2'
00:12:12.286   10:09:07 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@37 -- # bdevperf=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/examples/bdevperf
00:12:12.286   10:09:07 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@38 -- # bdevperf_rpc_sock=/tmp/bdevperf.sock
00:12:12.286   10:09:07 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@41 -- # bdevperf_pid=1789533
00:12:12.286   10:09:07 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@40 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/examples/bdevperf -r /tmp/bdevperf.sock -g -s 2048 -q 256 -o 4096 -w randrw -M 50 -t 30 -m 0xf0 -L vfio_pci -L virtio_vfio_user
00:12:12.286   10:09:07 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@42 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
00:12:12.286   10:09:07 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@43 -- # waitforlisten 1789533 /tmp/bdevperf.sock
00:12:12.286   10:09:07 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1789533 ']'
00:12:12.286   10:09:07 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/bdevperf.sock
00:12:12.286   10:09:07 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:12.286   10:09:07 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/bdevperf.sock...'
00:12:12.286  Waiting for process to start up and listen on UNIX domain socket /tmp/bdevperf.sock...
00:12:12.286   10:09:07 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:12.286   10:09:07 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:12:12.286  [2024-11-20 10:09:07.258399] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:12:12.286  [2024-11-20 10:09:07.258566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0xf0 -m 2048 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1789533 ]
00:12:12.286  EAL: No free 2048 kB hugepages reported on node 1
00:12:13.219  [2024-11-20 10:09:08.194311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:13.219  [2024-11-20 10:09:08.322016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:12:13.219  [2024-11-20 10:09:08.322068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:12:13.219  [2024-11-20 10:09:08.322115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:12:13.219  [2024-11-20 10:09:08.322120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:12:14.155   10:09:09 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:14.155   10:09:09 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:12:14.155   10:09:09 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@44 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /tmp/bdevperf.sock bdev_virtio_attach_controller --dev-type scsi --trtype vfio-user --traddr /tmp/vfu_devices/vfu.scsi VirtioScsi0
00:12:14.423  [2024-11-20 10:09:09.296901] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /tmp/vfu_devices/vfu.scsi: attached successfully
00:12:14.423  [2024-11-20 10:09:09.299107] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:14.423  [2024-11-20 10:09:09.300071] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:12:14.423  [2024-11-20 10:09:09.301090] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:12:14.423  [2024-11-20 10:09:09.302089] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:14.423  [2024-11-20 10:09:09.303121] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x4000, Offset 0x0, Flags 0xf, Cap offset 32
00:12:14.423  [2024-11-20 10:09:09.303172] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x3000, Map addr 0x7f3f7432f000
00:12:14.423  [2024-11-20 10:09:09.304114] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:14.423  [2024-11-20 10:09:09.305122] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:14.423  [2024-11-20 10:09:09.306142] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:12:14.423  [2024-11-20 10:09:09.307145] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:12:14.423  [2024-11-20 10:09:09.308149] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:14.423  [2024-11-20 10:09:09.309697] vfio_user_pci.c:  65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x80000000
00:12:14.423  [2024-11-20 10:09:09.318909] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /tmp/vfu_devices/vfu.scsi Setup Successfully
00:12:14.423  [2024-11-20 10:09:09.320237] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x0, length 0x4
00:12:14.423  [2024-11-20 10:09:09.321206] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x2000-0x2003, len = 4
00:12:14.423  [2024-11-20 10:09:09.321272] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 0
00:12:14.423  [2024-11-20 10:09:09.322201] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:12:14.423  [2024-11-20 10:09:09.322236] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0x0
00:12:14.423  [2024-11-20 10:09:09.322253] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 0, set status 0
00:12:14.423  [2024-11-20 10:09:09.322271] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.scsi resetting
00:12:14.423  [2024-11-20 10:09:09.323202] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:12:14.423  [2024-11-20 10:09:09.323227] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x0
00:12:14.423  [2024-11-20 10:09:09.323268] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:12:14.423  [2024-11-20 10:09:09.324209] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:12:14.423  [2024-11-20 10:09:09.324232] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x0
00:12:14.423  [2024-11-20 10:09:09.324278] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:12:14.423  [2024-11-20 10:09:09.324307] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 1
00:12:14.423  [2024-11-20 10:09:09.325216] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:12:14.423  [2024-11-20 10:09:09.325239] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0x1
00:12:14.423  [2024-11-20 10:09:09.325252] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 0, set status 1
00:12:14.423  [2024-11-20 10:09:09.326235] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:12:14.423  [2024-11-20 10:09:09.326253] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x1
00:12:14.423  [2024-11-20 10:09:09.326297] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 1
00:12:14.423  [2024-11-20 10:09:09.327238] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:12:14.423  [2024-11-20 10:09:09.327257] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x1
00:12:14.423  [2024-11-20 10:09:09.327294] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 1
00:12:14.423  [2024-11-20 10:09:09.327330] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 3
00:12:14.423  [2024-11-20 10:09:09.328249] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:12:14.423  [2024-11-20 10:09:09.328267] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0x3
00:12:14.423  [2024-11-20 10:09:09.328283] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 1, set status 3
00:12:14.423  [2024-11-20 10:09:09.329247] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:12:14.423  [2024-11-20 10:09:09.329270] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x3
00:12:14.423  [2024-11-20 10:09:09.329309] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 3
00:12:14.423  [2024-11-20 10:09:09.330266] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x0-0x3, len = 4
00:12:14.423  [2024-11-20 10:09:09.330292] vfu_virtio.c: 937:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_DFSELECT with 0x0
00:12:14.423  [2024-11-20 10:09:09.331274] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x4-0x7, len = 4
00:12:14.423  [2024-11-20 10:09:09.331299] vfu_virtio.c:1072:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_DF_LO with 0x10000007
00:12:14.423  [2024-11-20 10:09:09.332277] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x0-0x3, len = 4
00:12:14.423  [2024-11-20 10:09:09.332301] vfu_virtio.c: 937:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_DFSELECT with 0x1
00:12:14.423  [2024-11-20 10:09:09.333290] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x4-0x7, len = 4
00:12:14.423  [2024-11-20 10:09:09.333319] vfu_virtio.c:1067:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_DF_HI with 0x5
00:12:14.423  [2024-11-20 10:09:09.333365] virtio_vfio_user.c: 127:virtio_vfio_user_get_features: *DEBUG*: feature_hi 0x5, feature_low 0x10000007
00:12:14.423  [2024-11-20 10:09:09.334300] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x8-0xB, len = 4
00:12:14.423  [2024-11-20 10:09:09.334324] vfu_virtio.c: 943:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_GFSELECT with 0x0
00:12:14.423  [2024-11-20 10:09:09.335304] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0xC-0xF, len = 4
00:12:14.423  [2024-11-20 10:09:09.335329] vfu_virtio.c: 956:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_GF_LO with 0x3
00:12:14.423  [2024-11-20 10:09:09.335344] vfu_virtio.c: 255:virtio_dev_set_features: *DEBUG*: vfu.scsi: negotiated features 0x3
00:12:14.423  [2024-11-20 10:09:09.336313] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x8-0xB, len = 4
00:12:14.423  [2024-11-20 10:09:09.336332] vfu_virtio.c: 943:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_GFSELECT with 0x1
00:12:14.423  [2024-11-20 10:09:09.337326] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0xC-0xF, len = 4
00:12:14.423  [2024-11-20 10:09:09.337346] vfu_virtio.c: 951:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_GF_HI with 0x1
00:12:14.423  [2024-11-20 10:09:09.337363] vfu_virtio.c: 255:virtio_dev_set_features: *DEBUG*: vfu.scsi: negotiated features 0x100000003
00:12:14.424  [2024-11-20 10:09:09.337400] virtio_vfio_user.c: 176:virtio_vfio_user_set_features: *DEBUG*: features 0x100000003
00:12:14.424  [2024-11-20 10:09:09.338325] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:12:14.424  [2024-11-20 10:09:09.338348] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x3
00:12:14.424  [2024-11-20 10:09:09.338399] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 3
00:12:14.424  [2024-11-20 10:09:09.338428] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status b
00:12:14.424  [2024-11-20 10:09:09.339340] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:12:14.424  [2024-11-20 10:09:09.339365] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0xb
00:12:14.424  [2024-11-20 10:09:09.339379] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 3, set status b
00:12:14.424  [2024-11-20 10:09:09.340364] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:12:14.424  [2024-11-20 10:09:09.340383] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0xb
00:12:14.424  [2024-11-20 10:09:09.340426] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status b
00:12:14.424  [2024-11-20 10:09:09.341371] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:12:14.424  [2024-11-20 10:09:09.341390] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x0
00:12:14.424  [2024-11-20 10:09:09.342379] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x18-0x19, len = 2
00:12:14.424  [2024-11-20 10:09:09.342400] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ queue 0 PCI_COMMON_Q_SIZE with 0x100
00:12:14.424  [2024-11-20 10:09:09.342440] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 0, size 256
00:12:14.424  [2024-11-20 10:09:09.343385] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:12:14.424  [2024-11-20 10:09:09.343403] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x0
00:12:14.424  [2024-11-20 10:09:09.344390] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x20-0x23, len = 4
00:12:14.424  [2024-11-20 10:09:09.344410] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_DESCLO with 0x6a2ec000
00:12:14.424  [2024-11-20 10:09:09.345394] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x24-0x27, len = 4
00:12:14.424  [2024-11-20 10:09:09.345414] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_DESCHI with 0x2000
00:12:14.424  [2024-11-20 10:09:09.346409] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x28-0x2B, len = 4
00:12:14.424  [2024-11-20 10:09:09.346428] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_AVAILLO with 0x6a2ed000
00:12:14.424  [2024-11-20 10:09:09.347414] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x2C-0x2F, len = 4
00:12:14.424  [2024-11-20 10:09:09.347433] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_AVAILHI with 0x2000
00:12:14.424  [2024-11-20 10:09:09.348425] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x30-0x33, len = 4
00:12:14.424  [2024-11-20 10:09:09.348444] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_USEDLO with 0x6a2ee000
00:12:14.424  [2024-11-20 10:09:09.349423] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x34-0x37, len = 4
00:12:14.424  [2024-11-20 10:09:09.349443] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_USEDHI with 0x2000
00:12:14.424  [2024-11-20 10:09:09.350431] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x1E-0x1F, len = 2
00:12:14.424  [2024-11-20 10:09:09.350450] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_Q_NOFF with 0x0
00:12:14.424  [2024-11-20 10:09:09.351436] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:12:14.424  [2024-11-20 10:09:09.351455] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:12:14.424  [2024-11-20 10:09:09.351472] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.scsi: enable vq 0
00:12:14.424  [2024-11-20 10:09:09.351486] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.scsi: try to map vq 0
00:12:14.424  [2024-11-20 10:09:09.351537] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.scsi: map vq 0 successfully
00:12:14.424  [2024-11-20 10:09:09.351588] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 0 addresses:
00:12:14.424  [2024-11-20 10:09:09.351617] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 20006a2ec000
00:12:14.424  [2024-11-20 10:09:09.351634] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 20006a2ed000
00:12:14.424  [2024-11-20 10:09:09.351647] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 20006a2ee000
00:12:14.424  [2024-11-20 10:09:09.352438] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:12:14.424  [2024-11-20 10:09:09.352461] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x1
00:12:14.424  [2024-11-20 10:09:09.353459] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x18-0x19, len = 2
00:12:14.424  [2024-11-20 10:09:09.353486] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ queue 1 PCI_COMMON_Q_SIZE with 0x100
00:12:14.424  [2024-11-20 10:09:09.353538] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 1, size 256
00:12:14.424  [2024-11-20 10:09:09.354464] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:12:14.424  [2024-11-20 10:09:09.354492] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x1
00:12:14.424  [2024-11-20 10:09:09.355468] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x20-0x23, len = 4
00:12:14.424  [2024-11-20 10:09:09.355492] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_DESCLO with 0x6a2e8000
00:12:14.424  [2024-11-20 10:09:09.356475] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x24-0x27, len = 4
00:12:14.424  [2024-11-20 10:09:09.356505] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_DESCHI with 0x2000
00:12:14.424  [2024-11-20 10:09:09.357505] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x28-0x2B, len = 4
00:12:14.424  [2024-11-20 10:09:09.357530] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_AVAILLO with 0x6a2e9000
00:12:14.424  [2024-11-20 10:09:09.358488] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x2C-0x2F, len = 4
00:12:14.424  [2024-11-20 10:09:09.358519] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_AVAILHI with 0x2000
00:12:14.424  [2024-11-20 10:09:09.359496] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x30-0x33, len = 4
00:12:14.424  [2024-11-20 10:09:09.359526] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_USEDLO with 0x6a2ea000
00:12:14.424  [2024-11-20 10:09:09.360512] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x34-0x37, len = 4
00:12:14.424  [2024-11-20 10:09:09.360536] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_USEDHI with 0x2000
00:12:14.424  [2024-11-20 10:09:09.361524] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x1E-0x1F, len = 2
00:12:14.424  [2024-11-20 10:09:09.361547] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_Q_NOFF with 0x1
00:12:14.424  [2024-11-20 10:09:09.362534] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:12:14.424  [2024-11-20 10:09:09.362563] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:12:14.424  [2024-11-20 10:09:09.362576] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.scsi: enable vq 1
00:12:14.424  [2024-11-20 10:09:09.362589] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.scsi: try to map vq 1
00:12:14.424  [2024-11-20 10:09:09.362603] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.scsi: map vq 1 successfully
00:12:14.424  [2024-11-20 10:09:09.362640] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 1 addresses:
00:12:14.424  [2024-11-20 10:09:09.362675] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 20006a2e8000
00:12:14.424  [2024-11-20 10:09:09.362689] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 20006a2e9000
00:12:14.424  [2024-11-20 10:09:09.362703] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 20006a2ea000
00:12:14.424  [2024-11-20 10:09:09.363548] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:12:14.424  [2024-11-20 10:09:09.363566] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x2
00:12:14.424  [2024-11-20 10:09:09.364560] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x18-0x19, len = 2
00:12:14.424  [2024-11-20 10:09:09.364583] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ queue 2 PCI_COMMON_Q_SIZE with 0x100
00:12:14.424  [2024-11-20 10:09:09.364621] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 2, size 256
00:12:14.424  [2024-11-20 10:09:09.365569] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:12:14.424  [2024-11-20 10:09:09.365587] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x2
00:12:14.424  [2024-11-20 10:09:09.366577] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x20-0x23, len = 4
00:12:14.424  [2024-11-20 10:09:09.366596] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_DESCLO with 0x6a2e4000
00:12:14.424  [2024-11-20 10:09:09.367582] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x24-0x27, len = 4
00:12:14.424  [2024-11-20 10:09:09.367601] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_DESCHI with 0x2000
00:12:14.424  [2024-11-20 10:09:09.368584] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x28-0x2B, len = 4
00:12:14.424  [2024-11-20 10:09:09.368603] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_AVAILLO with 0x6a2e5000
00:12:14.424  [2024-11-20 10:09:09.369596] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x2C-0x2F, len = 4
00:12:14.424  [2024-11-20 10:09:09.369614] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_AVAILHI with 0x2000
00:12:14.424  [2024-11-20 10:09:09.370599] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x30-0x33, len = 4
00:12:14.424  [2024-11-20 10:09:09.370618] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_USEDLO with 0x6a2e6000
00:12:14.424  [2024-11-20 10:09:09.371602] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x34-0x37, len = 4
00:12:14.424  [2024-11-20 10:09:09.371621] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_USEDHI with 0x2000
00:12:14.424  [2024-11-20 10:09:09.372613] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x1E-0x1F, len = 2
00:12:14.425  [2024-11-20 10:09:09.372631] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_Q_NOFF with 0x2
00:12:14.425  [2024-11-20 10:09:09.373616] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:12:14.425  [2024-11-20 10:09:09.373634] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:12:14.425  [2024-11-20 10:09:09.373650] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.scsi: enable vq 2
00:12:14.425  [2024-11-20 10:09:09.373661] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.scsi: try to map vq 2
00:12:14.425  [2024-11-20 10:09:09.373679] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.scsi: map vq 2 successfully
00:12:14.425  [2024-11-20 10:09:09.373723] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 2 addresses:
00:12:14.425  [2024-11-20 10:09:09.373751] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 20006a2e4000
00:12:14.425  [2024-11-20 10:09:09.373770] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 20006a2e5000
00:12:14.425  [2024-11-20 10:09:09.373782] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 20006a2e6000
00:12:14.425  [2024-11-20 10:09:09.374619] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:12:14.425  [2024-11-20 10:09:09.374645] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x3
00:12:14.425  [2024-11-20 10:09:09.375625] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x18-0x19, len = 2
00:12:14.425  [2024-11-20 10:09:09.375660] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ queue 3 PCI_COMMON_Q_SIZE with 0x100
00:12:14.425  [2024-11-20 10:09:09.375706] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 3, size 256
00:12:14.425  [2024-11-20 10:09:09.376633] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:12:14.425  [2024-11-20 10:09:09.376656] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x3
00:12:14.425  [2024-11-20 10:09:09.377656] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x20-0x23, len = 4
00:12:14.425  [2024-11-20 10:09:09.377680] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_DESCLO with 0x6a2e0000
00:12:14.425  [2024-11-20 10:09:09.378655] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x24-0x27, len = 4
00:12:14.425  [2024-11-20 10:09:09.378679] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_DESCHI with 0x2000
00:12:14.425  [2024-11-20 10:09:09.379663] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x28-0x2B, len = 4
00:12:14.425  [2024-11-20 10:09:09.379686] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_AVAILLO with 0x6a2e1000
00:12:14.425  [2024-11-20 10:09:09.380671] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x2C-0x2F, len = 4
00:12:14.425  [2024-11-20 10:09:09.380695] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_AVAILHI with 0x2000
00:12:14.425  [2024-11-20 10:09:09.381683] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x30-0x33, len = 4
00:12:14.425  [2024-11-20 10:09:09.381706] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_USEDLO with 0x6a2e2000
00:12:14.425  [2024-11-20 10:09:09.382694] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x34-0x37, len = 4
00:12:14.425  [2024-11-20 10:09:09.382718] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_USEDHI with 0x2000
00:12:14.425  [2024-11-20 10:09:09.383703] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x1E-0x1F, len = 2
00:12:14.425  [2024-11-20 10:09:09.383731] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_Q_NOFF with 0x3
00:12:14.425  [2024-11-20 10:09:09.384713] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:12:14.425  [2024-11-20 10:09:09.384738] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:12:14.425  [2024-11-20 10:09:09.384751] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.scsi: enable vq 3
00:12:14.425  [2024-11-20 10:09:09.384765] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.scsi: try to map vq 3
00:12:14.425  [2024-11-20 10:09:09.384778] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.scsi: map vq 3 successfully
00:12:14.425  [2024-11-20 10:09:09.384814] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 3 addresses:
00:12:14.425  [2024-11-20 10:09:09.384849] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 20006a2e0000
00:12:14.425  [2024-11-20 10:09:09.384863] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 20006a2e1000
00:12:14.425  [2024-11-20 10:09:09.384876] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 20006a2e2000
00:12:14.425  [2024-11-20 10:09:09.385722] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:12:14.425  [2024-11-20 10:09:09.385741] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0xb
00:12:14.425  [2024-11-20 10:09:09.385784] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status b
00:12:14.425  [2024-11-20 10:09:09.385822] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status f
00:12:14.425  [2024-11-20 10:09:09.386728] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:12:14.425  [2024-11-20 10:09:09.386747] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0xf
00:12:14.425  [2024-11-20 10:09:09.386762] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status b, set status f
00:12:14.425  [2024-11-20 10:09:09.386775] vfu_virtio.c:1365:vfu_virtio_dev_start: *DEBUG*: start vfu.scsi
00:12:14.425  [2024-11-20 10:09:09.389174] vfu_virtio.c:1377:vfu_virtio_dev_start: *DEBUG*: vfu.scsi is started with ret 0
00:12:14.425  [2024-11-20 10:09:09.390249] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:12:14.425  [2024-11-20 10:09:09.390275] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0xf
00:12:14.425  [2024-11-20 10:09:09.390315] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status f
00:12:14.425  VirtioScsi0t0 VirtioScsi0t1
00:12:14.425   10:09:09 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@46 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /tmp/bdevperf.sock bdev_virtio_attach_controller --dev-type blk --trtype vfio-user --traddr /tmp/vfu_devices/vfu.blk VirtioBlk0
00:12:14.684  [2024-11-20 10:09:09.669758] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /tmp/vfu_devices/vfu.blk: attached successfully
00:12:14.684  [2024-11-20 10:09:09.671927] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:14.684  [2024-11-20 10:09:09.672923] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:12:14.684  [2024-11-20 10:09:09.673951] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:12:14.684  [2024-11-20 10:09:09.674959] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:14.684  [2024-11-20 10:09:09.675973] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x4000, Offset 0x0, Flags 0xf, Cap offset 32
00:12:14.684  [2024-11-20 10:09:09.676023] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x3000, Map addr 0x7f3f7430b000
00:12:14.684  [2024-11-20 10:09:09.676982] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:14.684  [2024-11-20 10:09:09.677980] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:14.684  [2024-11-20 10:09:09.679028] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:12:14.684  [2024-11-20 10:09:09.680005] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:12:14.684  [2024-11-20 10:09:09.681010] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:12:14.684  [2024-11-20 10:09:09.682511] vfio_user_pci.c:  65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x80000000
00:12:14.684  [2024-11-20 10:09:09.691552] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user1, Path /tmp/vfu_devices/vfu.blk Setup Successfully
00:12:14.684  [2024-11-20 10:09:09.693096] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 0
00:12:14.684  [2024-11-20 10:09:09.694064] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:12:14.684  [2024-11-20 10:09:09.694094] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0x0
00:12:14.684  [2024-11-20 10:09:09.694120] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 0, set status 0
00:12:14.684  [2024-11-20 10:09:09.694135] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.blk resetting
00:12:14.684  [2024-11-20 10:09:09.695075] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:12:14.684  [2024-11-20 10:09:09.695097] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x0
00:12:14.684  [2024-11-20 10:09:09.695142] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:12:14.684  [2024-11-20 10:09:09.696078] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:12:14.684  [2024-11-20 10:09:09.696097] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x0
00:12:14.684  [2024-11-20 10:09:09.696133] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:12:14.684  [2024-11-20 10:09:09.696170] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 1
00:12:14.684  [2024-11-20 10:09:09.697093] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:12:14.684  [2024-11-20 10:09:09.697111] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0x1
00:12:14.684  [2024-11-20 10:09:09.697127] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 0, set status 1
00:12:14.684  [2024-11-20 10:09:09.698096] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:12:14.684  [2024-11-20 10:09:09.698128] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x1
00:12:14.684  [2024-11-20 10:09:09.698166] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 1
00:12:14.684  [2024-11-20 10:09:09.699107] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:12:14.684  [2024-11-20 10:09:09.699130] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x1
00:12:14.684  [2024-11-20 10:09:09.699176] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 1
00:12:14.684  [2024-11-20 10:09:09.699205] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 3
00:12:14.684  [2024-11-20 10:09:09.700114] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:12:14.684  [2024-11-20 10:09:09.700137] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0x3
00:12:14.684  [2024-11-20 10:09:09.700150] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 1, set status 3
00:12:14.684  [2024-11-20 10:09:09.701131] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:12:14.684  [2024-11-20 10:09:09.701149] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x3
00:12:14.684  [2024-11-20 10:09:09.701193] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 3
00:12:14.684  [2024-11-20 10:09:09.702135] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x0-0x3, len = 4
00:12:14.684  [2024-11-20 10:09:09.702154] vfu_virtio.c: 937:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_DFSELECT with 0x0
00:12:14.684  [2024-11-20 10:09:09.703146] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x4-0x7, len = 4
00:12:14.684  [2024-11-20 10:09:09.703166] vfu_virtio.c:1072:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_DF_LO with 0x10007646
00:12:14.685  [2024-11-20 10:09:09.704168] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x0-0x3, len = 4
00:12:14.685  [2024-11-20 10:09:09.704187] vfu_virtio.c: 937:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_DFSELECT with 0x1
00:12:14.685  [2024-11-20 10:09:09.705161] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x4-0x7, len = 4
00:12:14.685  [2024-11-20 10:09:09.705180] vfu_virtio.c:1067:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_DF_HI with 0x5
00:12:14.685  [2024-11-20 10:09:09.705218] virtio_vfio_user.c: 127:virtio_vfio_user_get_features: *DEBUG*: feature_hi 0x5, feature_low 0x10007646
00:12:14.685  [2024-11-20 10:09:09.706176] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x8-0xB, len = 4
00:12:14.685  [2024-11-20 10:09:09.706196] vfu_virtio.c: 943:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_GFSELECT with 0x0
00:12:14.685  [2024-11-20 10:09:09.707177] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0xC-0xF, len = 4
00:12:14.685  [2024-11-20 10:09:09.707197] vfu_virtio.c: 956:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_GF_LO with 0x3446
00:12:14.685  [2024-11-20 10:09:09.707214] vfu_virtio.c: 255:virtio_dev_set_features: *DEBUG*: vfu.blk: negotiated features 0x3446
00:12:14.685  [2024-11-20 10:09:09.708186] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x8-0xB, len = 4
00:12:14.685  [2024-11-20 10:09:09.708210] vfu_virtio.c: 943:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_GFSELECT with 0x1
00:12:14.685  [2024-11-20 10:09:09.709198] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0xC-0xF, len = 4
00:12:14.685  [2024-11-20 10:09:09.709222] vfu_virtio.c: 951:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_GF_HI with 0x1
00:12:14.685  [2024-11-20 10:09:09.709237] vfu_virtio.c: 255:virtio_dev_set_features: *DEBUG*: vfu.blk: negotiated features 0x100003446
00:12:14.685  [2024-11-20 10:09:09.709281] virtio_vfio_user.c: 176:virtio_vfio_user_set_features: *DEBUG*: features 0x100003446
00:12:14.685  [2024-11-20 10:09:09.710212] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:12:14.685  [2024-11-20 10:09:09.710231] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x3
00:12:14.685  [2024-11-20 10:09:09.710268] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 3
00:12:14.685  [2024-11-20 10:09:09.710304] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status b
00:12:14.685  [2024-11-20 10:09:09.711225] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:12:14.685  [2024-11-20 10:09:09.711243] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0xb
00:12:14.685  [2024-11-20 10:09:09.711261] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 3, set status b
00:12:14.685  [2024-11-20 10:09:09.712227] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:12:14.685  [2024-11-20 10:09:09.712253] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0xb
00:12:14.685  [2024-11-20 10:09:09.712306] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status b
00:12:14.685  [2024-11-20 10:09:09.712345] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x22, length 0x2
00:12:14.685  [2024-11-20 10:09:09.713241] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2022-0x2023, len = 2
00:12:14.685  [2024-11-20 10:09:09.713286] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x14, length 0x4
00:12:14.685  [2024-11-20 10:09:09.714248] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2014-0x2017, len = 4
00:12:14.685  [2024-11-20 10:09:09.714297] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x0, length 0x8
00:12:14.685  [2024-11-20 10:09:09.715249] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2000-0x2007, len = 8
00:12:14.685  [2024-11-20 10:09:09.715296] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x22, length 0x2
00:12:14.685  [2024-11-20 10:09:09.716256] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2022-0x2023, len = 2
00:12:14.685  [2024-11-20 10:09:09.716304] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x8, length 0x4
00:12:14.685  [2024-11-20 10:09:09.717269] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2008-0x200B, len = 4
00:12:14.685  [2024-11-20 10:09:09.717320] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0xc, length 0x4
00:12:14.685  [2024-11-20 10:09:09.718286] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x200C-0x200F, len = 4
00:12:14.685  [2024-11-20 10:09:09.719299] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:12:14.685  [2024-11-20 10:09:09.719323] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x0
00:12:14.685  [2024-11-20 10:09:09.720310] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x18-0x19, len = 2
00:12:14.685  [2024-11-20 10:09:09.720335] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ queue 0 PCI_COMMON_Q_SIZE with 0x100
00:12:14.685  [2024-11-20 10:09:09.720381] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 0, size 256
00:12:14.685  [2024-11-20 10:09:09.721315] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:12:14.685  [2024-11-20 10:09:09.721338] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x0
00:12:14.685  [2024-11-20 10:09:09.722321] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x20-0x23, len = 4
00:12:14.685  [2024-11-20 10:09:09.722346] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_DESCLO with 0x6a2dc000
00:12:14.685  [2024-11-20 10:09:09.723322] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x24-0x27, len = 4
00:12:14.685  [2024-11-20 10:09:09.723350] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_DESCHI with 0x2000
00:12:14.685  [2024-11-20 10:09:09.724338] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x28-0x2B, len = 4
00:12:14.685  [2024-11-20 10:09:09.724364] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_AVAILLO with 0x6a2dd000
00:12:14.685  [2024-11-20 10:09:09.725345] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x2C-0x2F, len = 4
00:12:14.685  [2024-11-20 10:09:09.725369] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_AVAILHI with 0x2000
00:12:14.685  [2024-11-20 10:09:09.726352] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x30-0x33, len = 4
00:12:14.685  [2024-11-20 10:09:09.726376] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_USEDLO with 0x6a2de000
00:12:14.685  [2024-11-20 10:09:09.727369] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x34-0x37, len = 4
00:12:14.685  [2024-11-20 10:09:09.727393] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_USEDHI with 0x2000
00:12:14.685  [2024-11-20 10:09:09.728380] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x1E-0x1F, len = 2
00:12:14.685  [2024-11-20 10:09:09.728405] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_Q_NOFF with 0x0
00:12:14.685  [2024-11-20 10:09:09.729389] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x1C-0x1D, len = 2
00:12:14.685  [2024-11-20 10:09:09.729413] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:12:14.685  [2024-11-20 10:09:09.729429] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.blk: enable vq 0
00:12:14.685  [2024-11-20 10:09:09.729445] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.blk: try to map vq 0
00:12:14.685  [2024-11-20 10:09:09.729476] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.blk: map vq 0 successfully
00:12:14.685  [2024-11-20 10:09:09.729529] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 0 addresses:
00:12:14.685  [2024-11-20 10:09:09.729570] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 20006a2dc000
00:12:14.685  [2024-11-20 10:09:09.729589] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 20006a2dd000
00:12:14.685  [2024-11-20 10:09:09.729604] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 20006a2de000
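The queue 0 programming above splits each 64-bit guest address into two 32-bit writes (Q_DESCLO/Q_DESCHI, Q_AVAILLO/Q_AVAILHI, Q_USEDLO/Q_USEDHI). A minimal sketch, using the LO/HI values taken from the log lines above, reproduces the mapped addresses the target prints:

```python
# Reconstruct the 64-bit virtqueue addresses from the 32-bit LO/HI
# register writes recorded in the log above. The virtio-pci common
# configuration structure has no 64-bit registers, so drivers always
# program these addresses as two 32-bit halves.
def combine(lo: int, hi: int) -> int:
    return (hi << 32) | lo

# Values from the WRITE queue 0 Q_*LO/Q_*HI log lines.
desc = combine(0x6A2DC000, 0x2000)
avail = combine(0x6A2DD000, 0x2000)
used = combine(0x6A2DE000, 0x2000)

# These match the desc_addr/aval_addr/used_addr lines printed
# by virtio_vfio_user_setup_queue.
print(hex(desc), hex(avail), hex(used))
```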
00:12:14.685  [2024-11-20 10:09:09.730407] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:12:14.685  [2024-11-20 10:09:09.730426] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x1
00:12:14.685  [2024-11-20 10:09:09.731409] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x18-0x19, len = 2
00:12:14.686  [2024-11-20 10:09:09.731429] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ queue 1 PCI_COMMON_Q_SIZE with 0x100
00:12:14.686  [2024-11-20 10:09:09.731465] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 1, size 256
00:12:14.686  [2024-11-20 10:09:09.732414] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:12:14.686  [2024-11-20 10:09:09.732433] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x1
00:12:14.686  [2024-11-20 10:09:09.733423] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x20-0x23, len = 4
00:12:14.686  [2024-11-20 10:09:09.733442] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_DESCLO with 0x6a2d8000
00:12:14.686  [2024-11-20 10:09:09.734433] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x24-0x27, len = 4
00:12:14.686  [2024-11-20 10:09:09.734453] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_DESCHI with 0x2000
00:12:14.686  [2024-11-20 10:09:09.735450] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x28-0x2B, len = 4
00:12:14.686  [2024-11-20 10:09:09.735469] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_AVAILLO with 0x6a2d9000
00:12:14.686  [2024-11-20 10:09:09.736452] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x2C-0x2F, len = 4
00:12:14.686  [2024-11-20 10:09:09.736471] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_AVAILHI with 0x2000
00:12:14.686  [2024-11-20 10:09:09.737466] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x30-0x33, len = 4
00:12:14.686  [2024-11-20 10:09:09.737485] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_USEDLO with 0x6a2da000
00:12:14.686  [2024-11-20 10:09:09.738481] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x34-0x37, len = 4
00:12:14.686  [2024-11-20 10:09:09.738508] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_USEDHI with 0x2000
00:12:14.686  [2024-11-20 10:09:09.739488] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x1E-0x1F, len = 2
00:12:14.686  [2024-11-20 10:09:09.739514] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_Q_NOFF with 0x1
00:12:14.686  [2024-11-20 10:09:09.740507] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x1C-0x1D, len = 2
00:12:14.686  [2024-11-20 10:09:09.740526] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:12:14.686  [2024-11-20 10:09:09.740542] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.blk: enable vq 1
00:12:14.686  [2024-11-20 10:09:09.740553] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.blk: try to map vq 1
00:12:14.686  [2024-11-20 10:09:09.740570] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.blk: map vq 1 successfully
00:12:14.686  [2024-11-20 10:09:09.740619] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 1 addresses:
00:12:14.686  [2024-11-20 10:09:09.740647] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 20006a2d8000
00:12:14.686  [2024-11-20 10:09:09.740662] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 20006a2d9000
00:12:14.686  [2024-11-20 10:09:09.740677] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 20006a2da000
00:12:14.686  [2024-11-20 10:09:09.741518] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:12:14.686  [2024-11-20 10:09:09.741542] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0xb
00:12:14.686  [2024-11-20 10:09:09.741587] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status b
00:12:14.686  [2024-11-20 10:09:09.741619] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status f
00:12:14.686  [2024-11-20 10:09:09.742530] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:12:14.686  [2024-11-20 10:09:09.742554] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0xf
00:12:14.686  [2024-11-20 10:09:09.742568] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status b, set status f
00:12:14.686  [2024-11-20 10:09:09.742582] vfu_virtio.c:1365:vfu_virtio_dev_start: *DEBUG*: start vfu.blk
00:12:14.686  [2024-11-20 10:09:09.744849] vfu_virtio.c:1377:vfu_virtio_dev_start: *DEBUG*: vfu.blk is started with ret 0
00:12:14.686  [2024-11-20 10:09:09.745943] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:12:14.686  [2024-11-20 10:09:09.745964] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0xf
00:12:14.686  [2024-11-20 10:09:09.746008] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status f
00:12:14.686  VirtioBlk0
00:12:14.686   10:09:09 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@50 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /tmp/bdevperf.sock perform_tests
00:12:14.944  Running I/O for 30 seconds...
00:12:16.833      89616.00 IOPS,   350.06 MiB/s
[2024-11-20T09:09:13.325Z]     89661.00 IOPS,   350.24 MiB/s
[2024-11-20T09:09:14.259Z]     89631.33 IOPS,   350.12 MiB/s
[2024-11-20T09:09:15.191Z]     89672.00 IOPS,   350.28 MiB/s
[2024-11-20T09:09:16.121Z]     89648.80 IOPS,   350.19 MiB/s
[2024-11-20T09:09:17.052Z]     89631.33 IOPS,   350.12 MiB/s
[2024-11-20T09:09:17.984Z]     89648.00 IOPS,   350.19 MiB/s
[2024-11-20T09:09:18.916Z]     89668.50 IOPS,   350.27 MiB/s
[2024-11-20T09:09:20.286Z]     89677.78 IOPS,   350.30 MiB/s
[2024-11-20T09:09:21.219Z]     89670.10 IOPS,   350.27 MiB/s
[2024-11-20T09:09:22.151Z]     89677.82 IOPS,   350.30 MiB/s
[2024-11-20T09:09:23.083Z]     89686.75 IOPS,   350.34 MiB/s
[2024-11-20T09:09:24.016Z]     89697.85 IOPS,   350.38 MiB/s
[2024-11-20T09:09:24.947Z]     89700.43 IOPS,   350.39 MiB/s
[2024-11-20T09:09:26.320Z]     89706.27 IOPS,   350.42 MiB/s
[2024-11-20T09:09:27.253Z]     89707.38 IOPS,   350.42 MiB/s
[2024-11-20T09:09:28.187Z]     89699.71 IOPS,   350.39 MiB/s
[2024-11-20T09:09:29.120Z]     89700.50 IOPS,   350.39 MiB/s
[2024-11-20T09:09:30.052Z]     89702.47 IOPS,   350.40 MiB/s
[2024-11-20T09:09:30.985Z]     89704.60 IOPS,   350.41 MiB/s
[2024-11-20T09:09:32.357Z]     89695.67 IOPS,   350.37 MiB/s
[2024-11-20T09:09:33.291Z]     89705.00 IOPS,   350.41 MiB/s
[2024-11-20T09:09:34.229Z]     89711.00 IOPS,   350.43 MiB/s
[2024-11-20T09:09:35.167Z]     89715.42 IOPS,   350.45 MiB/s
[2024-11-20T09:09:36.104Z]     89708.32 IOPS,   350.42 MiB/s
[2024-11-20T09:09:37.041Z]     89712.96 IOPS,   350.44 MiB/s
[2024-11-20T09:09:37.978Z]     89712.85 IOPS,   350.44 MiB/s
[2024-11-20T09:09:39.353Z]     89714.11 IOPS,   350.45 MiB/s
[2024-11-20T09:09:40.292Z]     89707.45 IOPS,   350.42 MiB/s
[2024-11-20T09:09:40.292Z]     89715.13 IOPS,   350.45 MiB/s
00:12:45.171                                                                                                  Latency(us)
00:12:45.171  
[2024-11-20T09:09:40.292Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:45.171  Job: VirtioScsi0t0 (Core Mask 0x10, workload: randrw, percentage: 50, depth: 256, IO size: 4096)
00:12:45.171  	 VirtioScsi0t0       :      30.01   20094.58      78.49       0.00     0.00   12732.12    1832.58   14563.56
00:12:45.171  Job: VirtioScsi0t1 (Core Mask 0x20, workload: randrw, percentage: 50, depth: 256, IO size: 4096)
00:12:45.171  	 VirtioScsi0t1       :      30.01   20094.25      78.49       0.00     0.00   12732.39    1808.31   14660.65
00:12:45.171  Job: VirtioBlk0 (Core Mask 0x40, workload: randrw, percentage: 50, depth: 256, IO size: 4096)
00:12:45.171  	 VirtioBlk0          :      30.01   49517.74     193.43       0.00     0.00    5164.63    1759.76    6990.51
00:12:45.171  
[2024-11-20T09:09:40.292Z]  ===================================================================================================================
00:12:45.171  
[2024-11-20T09:09:40.292Z]  Total                       :              89706.57     350.42       0.00     0.00    8555.23    1759.76   14660.65
00:12:45.171  {
00:12:45.171    "results": [
00:12:45.171      {
00:12:45.171        "job": "VirtioScsi0t0",
00:12:45.171        "core_mask": "0x10",
00:12:45.171        "workload": "randrw",
00:12:45.171        "percentage": 50,
00:12:45.171        "status": "finished",
00:12:45.171        "queue_depth": 256,
00:12:45.171        "io_size": 4096,
00:12:45.171        "runtime": 30.010387,
00:12:45.171        "iops": 20094.57592133017,
00:12:45.171        "mibps": 78.49443719269598,
00:12:45.171        "io_failed": 0,
00:12:45.171        "io_timeout": 0,
00:12:45.171        "avg_latency_us": 12732.118337020174,
00:12:45.171        "min_latency_us": 1832.5807407407408,
00:12:45.171        "max_latency_us": 14563.555555555555
00:12:45.171      },
00:12:45.171      {
00:12:45.171        "job": "VirtioScsi0t1",
00:12:45.171        "core_mask": "0x20",
00:12:45.171        "workload": "randrw",
00:12:45.171        "percentage": 50,
00:12:45.171        "status": "finished",
00:12:45.171        "queue_depth": 256,
00:12:45.171        "io_size": 4096,
00:12:45.171        "runtime": 30.010227,
00:12:45.171        "iops": 20094.249870219242,
00:12:45.171        "mibps": 78.49316355554392,
00:12:45.171        "io_failed": 0,
00:12:45.171        "io_timeout": 0,
00:12:45.171        "avg_latency_us": 12732.394412083951,
00:12:45.171        "min_latency_us": 1808.3081481481481,
00:12:45.171        "max_latency_us": 14660.645925925926
00:12:45.171      },
00:12:45.171      {
00:12:45.172        "job": "VirtioBlk0",
00:12:45.172        "core_mask": "0x40",
00:12:45.172        "workload": "randrw",
00:12:45.172        "percentage": 50,
00:12:45.172        "status": "finished",
00:12:45.172        "queue_depth": 256,
00:12:45.172        "io_size": 4096,
00:12:45.172        "runtime": 30.005871,
00:12:45.172        "iops": 49517.74271108477,
00:12:45.172        "mibps": 193.4286824651749,
00:12:45.172        "io_failed": 0,
00:12:45.172        "io_timeout": 0,
00:12:45.172        "avg_latency_us": 5164.6268294326765,
00:12:45.172        "min_latency_us": 1759.762962962963,
00:12:45.172        "max_latency_us": 6990.506666666667
00:12:45.172      }
00:12:45.172    ],
00:12:45.172    "core_count": 3
00:12:45.172  }
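The `Total` row in the table above can be recomputed from the per-job JSON; a minimal sketch, with the `iops`/`mibps` values copied from the results block printed by bdevperf:

```python
# Recompute the bdevperf "Total" row from the per-job results above.
# Values are copied verbatim from the JSON block in the log.
results = [
    {"job": "VirtioScsi0t0", "iops": 20094.57592133017, "mibps": 78.49443719269598},
    {"job": "VirtioScsi0t1", "iops": 20094.249870219242, "mibps": 78.49316355554392},
    {"job": "VirtioBlk0", "iops": 49517.74271108477, "mibps": 193.4286824651749},
]

total_iops = sum(r["iops"] for r in results)
total_mibps = sum(r["mibps"] for r in results)

# Matches the Total line: 89706.57 IOPS, 350.42 MiB/s.
# Note mibps == iops / 256 for 4096-byte I/Os (4096 B / 1 MiB).
print(f"{total_iops:.2f} IOPS, {total_mibps:.2f} MiB/s")
```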
00:12:45.172   10:09:39 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@52 -- # killprocess 1789533
00:12:45.172   10:09:39 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1789533 ']'
00:12:45.172   10:09:39 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1789533
00:12:45.172    10:09:39 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@959 -- # uname
00:12:45.172   10:09:39 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:45.172    10:09:39 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1789533
00:12:45.172   10:09:40 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:12:45.172   10:09:40 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:12:45.172   10:09:40 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1789533'
00:12:45.172  killing process with pid 1789533
00:12:45.172   10:09:40 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@973 -- # kill 1789533
00:12:45.172  Received shutdown signal, test time was about 30.000000 seconds
00:12:45.172  
00:12:45.172                                                                                                  Latency(us)
00:12:45.172  
[2024-11-20T09:09:40.293Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:45.172  
[2024-11-20T09:09:40.293Z]  ===================================================================================================================
00:12:45.172  
[2024-11-20T09:09:40.293Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:12:45.172   10:09:40 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@978 -- # wait 1789533
00:12:45.172  [2024-11-20 10:09:40.028583] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 0
00:12:45.172  [2024-11-20 10:09:40.029305] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:12:45.172  [2024-11-20 10:09:40.029343] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0x0
00:12:45.172  [2024-11-20 10:09:40.029364] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status f, set status 0
00:12:45.172  [2024-11-20 10:09:40.029378] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.blk
00:12:45.172  [2024-11-20 10:09:40.029406] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.blk: unmap vq 0
00:12:45.172  [2024-11-20 10:09:40.029425] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.blk: unmap vq 1
00:12:45.172  [2024-11-20 10:09:40.029440] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.blk resetting
00:12:45.172  [2024-11-20 10:09:40.030289] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:12:45.172  [2024-11-20 10:09:40.030324] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x0
00:12:45.172  [2024-11-20 10:09:40.030356] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:12:45.172  [2024-11-20 10:09:40.031297] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:12:45.172  [2024-11-20 10:09:40.031322] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x0
00:12:45.172  [2024-11-20 10:09:40.032309] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x1C-0x1D, len = 2
00:12:45.172  [2024-11-20 10:09:40.032333] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:12:45.172  [2024-11-20 10:09:40.032348] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.blk: disable vq 0
00:12:45.172  [2024-11-20 10:09:40.032374] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 0 isn't enabled
00:12:45.172  [2024-11-20 10:09:40.033314] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:12:45.172  [2024-11-20 10:09:40.033339] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x1
00:12:45.172  [2024-11-20 10:09:40.034318] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x1C-0x1D, len = 2
00:12:45.172  [2024-11-20 10:09:40.034342] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:12:45.172  [2024-11-20 10:09:40.034356] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.blk: disable vq 1
00:12:45.172  [2024-11-20 10:09:40.034372] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 1 isn't enabled
00:12:45.172  [2024-11-20 10:09:40.034439] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /tmp/vfu_devices/vfu.blk
00:12:45.172  [2024-11-20 10:09:40.037123] vfio_user_pci.c:  96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x80000000
00:12:45.172  [2024-11-20 10:09:40.068129] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 0
00:12:45.172  [2024-11-20 10:09:40.068718] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.blk
00:12:45.172  [2024-11-20 10:09:40.068748] vfu_virtio.c:1391:vfu_virtio_dev_stop: *DEBUG*: vfu.blk isn't started
00:12:45.172  [2024-11-20 10:09:40.068762] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.blk resetting
00:12:45.172  [2024-11-20 10:09:40.068791] vfu_virtio.c:1416:vfu_virtio_detach_device: *DEBUG*: detach device vfu.blk
00:12:45.172  [2024-11-20 10:09:40.068817] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.blk
00:12:45.172  [2024-11-20 10:09:40.068832] vfu_virtio.c:1391:vfu_virtio_dev_stop: *DEBUG*: vfu.blk isn't started
00:12:45.172  [2024-11-20 10:09:40.069032] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:12:45.172  [2024-11-20 10:09:40.069073] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0x0
00:12:45.172  [2024-11-20 10:09:40.069089] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status f, set status 0
00:12:45.172  [2024-11-20 10:09:40.069107] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.scsi
00:12:45.172  [2024-11-20 10:09:40.069133] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.scsi: unmap vq 0
00:12:45.172  [2024-11-20 10:09:40.069153] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.scsi: unmap vq 1
00:12:45.172  [2024-11-20 10:09:40.069165] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.scsi: unmap vq 2
00:12:45.172  [2024-11-20 10:09:40.069180] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.scsi: unmap vq 3
00:12:45.172  [2024-11-20 10:09:40.069192] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.scsi resetting
00:12:45.172  [2024-11-20 10:09:40.070025] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:12:45.172  [2024-11-20 10:09:40.070047] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x0
00:12:45.172  [2024-11-20 10:09:40.070085] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:12:45.172  [2024-11-20 10:09:40.071025] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:12:45.172  [2024-11-20 10:09:40.071044] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x0
00:12:45.172  [2024-11-20 10:09:40.072037] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:12:45.172  [2024-11-20 10:09:40.072056] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:12:45.172  [2024-11-20 10:09:40.072073] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.scsi: disable vq 0
00:12:45.172  [2024-11-20 10:09:40.072085] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 0 isn't enabled
00:12:45.172  [2024-11-20 10:09:40.073047] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:12:45.172  [2024-11-20 10:09:40.073065] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x1
00:12:45.172  [2024-11-20 10:09:40.074051] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:12:45.172  [2024-11-20 10:09:40.074070] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:12:45.172  [2024-11-20 10:09:40.074086] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.scsi: disable vq 1
00:12:45.172  [2024-11-20 10:09:40.074097] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 1 isn't enabled
00:12:45.172  [2024-11-20 10:09:40.075055] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:12:45.172  [2024-11-20 10:09:40.075074] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x2
00:12:45.172  [2024-11-20 10:09:40.076062] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:12:45.172  [2024-11-20 10:09:40.076081] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:12:45.172  [2024-11-20 10:09:40.076096] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.scsi: disable vq 2
00:12:45.172  [2024-11-20 10:09:40.076107] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 2 isn't enabled
00:12:45.172  [2024-11-20 10:09:40.077074] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:12:45.172  [2024-11-20 10:09:40.077093] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x3
00:12:45.172  [2024-11-20 10:09:40.078087] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:12:45.172  [2024-11-20 10:09:40.078105] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:12:45.172  [2024-11-20 10:09:40.078123] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.scsi: disable vq 3
00:12:45.172  [2024-11-20 10:09:40.078134] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 3 isn't enabled
00:12:45.172  [2024-11-20 10:09:40.078197] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /tmp/vfu_devices/vfu.scsi
00:12:45.172  [2024-11-20 10:09:40.080779] vfio_user_pci.c:  96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x80000000
00:12:45.172  [2024-11-20 10:09:40.111343] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.scsi
00:12:45.172  [2024-11-20 10:09:40.111368] vfu_virtio.c:1391:vfu_virtio_dev_stop: *DEBUG*: vfu.scsi isn't started
00:12:45.172  [2024-11-20 10:09:40.111385] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.scsi resetting
00:12:45.172  [2024-11-20 10:09:40.111410] vfu_virtio.c:1416:vfu_virtio_detach_device: *DEBUG*: detach device vfu.scsi
00:12:45.172  [2024-11-20 10:09:40.111428] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.scsi
00:12:45.172  [2024-11-20 10:09:40.111440] vfu_virtio.c:1391:vfu_virtio_dev_stop: *DEBUG*: vfu.scsi isn't started
00:12:49.361   10:09:44 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@53 -- # trap - SIGINT SIGTERM EXIT
00:12:49.361   10:09:44 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_delete_endpoint vfu.blk
00:12:49.361  [2024-11-20 10:09:44.369270] tgt_endpoint.c: 701:spdk_vfu_delete_endpoint: *NOTICE*: Destruct endpoint vfu.blk
00:12:49.361   10:09:44 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@57 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_delete_endpoint vfu.scsi
00:12:49.620  [2024-11-20 10:09:44.678397] tgt_endpoint.c: 701:spdk_vfu_delete_endpoint: *NOTICE*: Destruct endpoint vfu.scsi
00:12:49.620   10:09:44 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@59 -- # killprocess 1789009
00:12:49.620   10:09:44 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1789009 ']'
00:12:49.620   10:09:44 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1789009
00:12:49.620    10:09:44 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@959 -- # uname
00:12:49.620   10:09:44 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:49.620    10:09:44 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1789009
00:12:49.620   10:09:44 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:49.620   10:09:44 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:49.620   10:09:44 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1789009'
00:12:49.620  killing process with pid 1789009
00:12:49.620   10:09:44 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@973 -- # kill 1789009
00:12:49.620   10:09:44 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@978 -- # wait 1789009
00:12:52.970  
00:12:52.970  real	0m44.168s
00:12:52.970  user	5m7.652s
00:12:52.970  sys	0m2.839s
00:12:52.970   10:09:47 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:52.970   10:09:47 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:12:52.970  ************************************
00:12:52.970  END TEST vfio_user_virtio_bdevperf
00:12:52.970  ************************************
00:12:52.970   10:09:47 vfio_user_qemu -- vfio_user/vfio_user.sh@20 -- # [[ y == y ]]
00:12:52.970   10:09:47 vfio_user_qemu -- vfio_user/vfio_user.sh@21 -- # run_test vfio_user_virtio_fs_fio /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_fs.sh
00:12:52.970   10:09:47 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:52.970   10:09:47 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:52.970   10:09:47 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:12:52.970  ************************************
00:12:52.970  START TEST vfio_user_virtio_fs_fio
00:12:52.971  ************************************
00:12:52.971   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_fs.sh
00:12:52.971  * Looking for test storage...
00:12:52.971  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1693 -- # lcov --version
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@336 -- # IFS=.-:
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@336 -- # read -ra ver1
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@337 -- # IFS=.-:
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@337 -- # read -ra ver2
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@338 -- # local 'op=<'
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@340 -- # ver1_l=2
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@341 -- # ver2_l=1
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@344 -- # case "$op" in
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@345 -- # : 1
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@365 -- # decimal 1
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@353 -- # local d=1
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@355 -- # echo 1
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@365 -- # ver1[v]=1
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@366 -- # decimal 2
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@353 -- # local d=2
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@355 -- # echo 2
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@366 -- # ver2[v]=2
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@368 -- # return 0
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:12:52.971  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:52.971  		--rc genhtml_branch_coverage=1
00:12:52.971  		--rc genhtml_function_coverage=1
00:12:52.971  		--rc genhtml_legend=1
00:12:52.971  		--rc geninfo_all_blocks=1
00:12:52.971  		--rc geninfo_unexecuted_blocks=1
00:12:52.971  		
00:12:52.971  		'
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:12:52.971  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:52.971  		--rc genhtml_branch_coverage=1
00:12:52.971  		--rc genhtml_function_coverage=1
00:12:52.971  		--rc genhtml_legend=1
00:12:52.971  		--rc geninfo_all_blocks=1
00:12:52.971  		--rc geninfo_unexecuted_blocks=1
00:12:52.971  		
00:12:52.971  		'
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:12:52.971  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:52.971  		--rc genhtml_branch_coverage=1
00:12:52.971  		--rc genhtml_function_coverage=1
00:12:52.971  		--rc genhtml_legend=1
00:12:52.971  		--rc geninfo_all_blocks=1
00:12:52.971  		--rc geninfo_unexecuted_blocks=1
00:12:52.971  		
00:12:52.971  		'
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:12:52.971  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:52.971  		--rc genhtml_branch_coverage=1
00:12:52.971  		--rc genhtml_function_coverage=1
00:12:52.971  		--rc genhtml_legend=1
00:12:52.971  		--rc geninfo_all_blocks=1
00:12:52.971  		--rc geninfo_unexecuted_blocks=1
00:12:52.971  		
00:12:52.971  		'
00:12:52.971   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@6 -- # : 128
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@7 -- # : 512
00:12:52.971    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@6 -- # : false
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@7 -- # : /root/vhost_test
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@9 -- # : qemu-img
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:12:52.971       10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_fs.sh
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@2 -- # vhost_0_main_core=0
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:12:52.971      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:12:52.971     10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:12:52.972      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:12:52.972      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:12:52.972      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:12:52.972      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:12:52.972      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:12:52.972      10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:12:52.972       10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:12:52.972        10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@244 -- # check_cgroup
00:12:52.972        10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:12:52.972        10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:12:52.972        10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@10 -- # echo 2
00:12:52.972       10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:12:52.972    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:12:52.972    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:12:52.972    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/common.sh
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@12 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config
00:12:52.972    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]'
00:12:52.972    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0
00:12:52.972    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5
00:12:52.972    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:12:52.972    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7
00:12:52.972    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:12:52.972    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9
00:12:52.972    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:12:52.972    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@14 -- # get_vhost_dir 0
00:12:52.972    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:12:52.972    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:12:52.972    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@14 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@16 -- # vhosttestinit
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@18 -- # trap 'error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@20 -- # vfu_tgt_run 0
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@6 -- # local vhost_name=0
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@7 -- # local vfio_user_dir vfu_pid_file rpc_py
00:12:52.972    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@9 -- # get_vhost_dir 0
00:12:52.972    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:12:52.972    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:12:52.972    10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@9 -- # vfio_user_dir=/root/vhost_test/vhost/0
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@10 -- # vfu_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@11 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@13 -- # mkdir -p /root/vhost_test/vhost/0
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@15 -- # timing_enter vfu_tgt_start
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@17 -- # vfupid=1794861
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@18 -- # echo 1794861
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@20 -- # echo 'Process pid: 1794861'
00:12:52.972  Process pid: 1794861
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@21 -- # echo 'waiting for app to run...'
00:12:52.972  waiting for app to run...
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@22 -- # waitforlisten 1794861 /root/vhost_test/vhost/0/rpc.sock
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@835 -- # '[' -z 1794861 ']'
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:12:52.972  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:52.972   10:09:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:12:52.972  [2024-11-20 10:09:47.772436] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:12:52.972  [2024-11-20 10:09:47.772582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1794861 ]
00:12:52.972  EAL: No free 2048 kB hugepages reported on node 1
00:12:52.972  [2024-11-20 10:09:48.039974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:53.231  [2024-11-20 10:09:48.146809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:53.231  [2024-11-20 10:09:48.146872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:53.231  [2024-11-20 10:09:48.146911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:53.231  [2024-11-20 10:09:48.146921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:12:53.797   10:09:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:53.797   10:09:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@868 -- # return 0
00:12:53.797   10:09:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@24 -- # timing_exit vfu_tgt_start
00:12:53.797   10:09:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:53.797   10:09:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:12:54.055   10:09:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@22 -- # vfu_vm_dir=/root/vhost_test/vms/vfu_tgt
00:12:54.055   10:09:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@23 -- # rm -rf /root/vhost_test/vms/vfu_tgt
00:12:54.055   10:09:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@24 -- # mkdir -p /root/vhost_test/vms/vfu_tgt
00:12:54.055   10:09:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@27 -- # disk_no=1
00:12:54.055   10:09:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@28 -- # vm_num=1
00:12:54.055   10:09:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@29 -- # job_file=default_fsdev.job
00:12:54.055   10:09:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@30 -- # be_virtiofs_dir=/tmp/vfio-test.1
00:12:54.055   10:09:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@31 -- # vm_virtiofs_dir=/tmp/virtiofs.1
00:12:54.055   10:09:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@33 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_tgt_set_base_path /root/vhost_test/vms/vfu_tgt
00:12:54.313   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@35 -- # rm -rf /tmp/vfio-test.1
00:12:54.314   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@36 -- # mkdir -p /tmp/vfio-test.1
00:12:54.314    10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@39 -- # mktemp --tmpdir=/tmp/vfio-test.1
00:12:54.314   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@39 -- # tmpfile=/tmp/vfio-test.1/tmp.Z5Cy0VWhRz
00:12:54.314   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@41 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock fsdev_aio_create aio.1 /tmp/vfio-test.1
00:12:54.571  aio.1
00:12:54.571   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_virtio_create_fs_endpoint virtio.1 --fsdev-name aio.1 --tag vfu_test.1 --num-queues=2 --qsize=512 --packed-ring
00:12:54.828   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@45 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:12:54.828   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@518 -- # xtrace_disable
00:12:54.828   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:12:54.829  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:12:54.829  INFO: Creating new VM in /root/vhost_test/vms/1
00:12:54.829  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:12:54.829  INFO: TASK MASK: 6-7
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@671 -- # local node_num=0
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@672 -- # local boot_disk_present=false
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:12:55.088  INFO: NUMA NODE: 0
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@677 -- # [[ -n '' ]]
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@686 -- # [[ -z '' ]]
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@701 -- # IFS=,
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@701 -- # read -r disk disk_type _
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@702 -- # [[ -z '' ]]
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@702 -- # disk_type=vfio_user_virtio
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@704 -- # case $disk_type in
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@766 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:12:55.088  INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@767 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@768 -- # [[ 1 == '' ]]
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@780 -- # [[ -n '' ]]
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@785 -- # (( 0 ))
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:12:55.088  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@787 -- # cat
00:12:55.088    10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@827 -- # echo 10100
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@828 -- # echo 10101
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@829 -- # echo 10102
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@832 -- # [[ -z '' ]]
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@834 -- # echo 10104
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@835 -- # echo 101
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@837 -- # [[ -z '' ]]
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@838 -- # [[ -z '' ]]
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@46 -- # vm_run 1
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@843 -- # local run_all=false
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@844 -- # local vms_to_run=
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@846 -- # getopts a-: optchar
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@856 -- # false
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@859 -- # shift 0
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@860 -- # for vm in "$@"
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:55.088   10:09:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:55.088   10:09:50 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:12:55.088   10:09:50 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:12:55.088   10:09:50 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:12:55.088   10:09:50 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@871 -- # vm_is_running 1
00:12:55.088   10:09:50 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:55.088   10:09:50 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:55.088   10:09:50 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:55.088   10:09:50 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:55.088   10:09:50 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:55.088   10:09:50 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@373 -- # return 1
00:12:55.088   10:09:50 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:12:55.088   10:09:50 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:12:55.088   10:09:50 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:12:55.088   10:09:50 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:12:55.088   10:09:50 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:12:55.088   10:09:50 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:55.088   10:09:50 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:12:55.088   10:09:50 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:12:55.088  INFO: running /root/vhost_test/vms/1/run.sh
00:12:55.088   10:09:50 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:12:55.088  Running VM in /root/vhost_test/vms/1
00:12:55.347  [2024-11-20 10:09:50.246070] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully
00:12:55.347  Waiting for QEMU pid file
00:12:56.282  === qemu.log ===
00:12:56.283  === qemu.log ===
00:12:56.283   10:09:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@47 -- # vm_wait_for_boot 60 1
00:12:56.283   10:09:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@913 -- # assert_number 60
00:12:56.283   10:09:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:12:56.283   10:09:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@281 -- # return 0
00:12:56.283   10:09:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@915 -- # xtrace_disable
00:12:56.283   10:09:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:12:56.283  INFO: Waiting for VMs to boot
00:12:56.283  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:13:18.228  
00:13:18.228  INFO: VM1 ready
00:13:18.228  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:13:18.228  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:13:18.228  INFO: all VMs ready
00:13:18.228   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@973 -- # return 0
00:13:18.228   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@49 -- # vm_exec 1 'mkdir /tmp/virtiofs.1'
00:13:18.228   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:13:18.228   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:18.228   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:18.228   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:13:18.228   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:13:18.228    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:13:18.228    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:13:18.228    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:18.228    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:18.228    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:13:18.228    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:13:18.228   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'mkdir /tmp/virtiofs.1'
00:13:18.228  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:13:18.228   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@50 -- # vm_exec 1 'mount -t virtiofs vfu_test.1 /tmp/virtiofs.1'
00:13:18.228   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:13:18.228   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:18.228   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:18.228   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:13:18.228   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:13:18.228    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:13:18.228    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:13:18.228    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:18.228    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:18.228    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:13:18.228    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:13:18.228   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'mount -t virtiofs vfu_test.1 /tmp/virtiofs.1'
00:13:18.228  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:13:18.487    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@52 -- # basename /tmp/vfio-test.1/tmp.Z5Cy0VWhRz
00:13:18.487   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@52 -- # vm_exec 1 'ls /tmp/virtiofs.1/tmp.Z5Cy0VWhRz'
00:13:18.487   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:13:18.487   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:18.487   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:18.487   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:13:18.487   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:13:18.487    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:13:18.487    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:13:18.487    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:18.487    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:18.487    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:13:18.487    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:13:18.487   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'ls /tmp/virtiofs.1/tmp.Z5Cy0VWhRz'
00:13:18.487  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:13:18.487  /tmp/virtiofs.1/tmp.Z5Cy0VWhRz
00:13:18.487   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@53 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 1
00:13:18.487   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@977 -- # local OPTIND optchar
00:13:18.487   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@978 -- # local readonly=
00:13:18.487   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@979 -- # local fio_bin=
00:13:18.487   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:13:18.487   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@981 -- # case "$optchar" in
00:13:18.487   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@983 -- # case "$OPTARG" in
00:13:18.487   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:13:18.487   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:13:18.487   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@993 -- # shift 1
00:13:18.488   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@994 -- # for vm_num in "$@"
00:13:18.488   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@995 -- # notice 'Starting fio server on VM1'
00:13:18.488   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM1'
00:13:18.488   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:13:18.488   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:13:18.488   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:13:18.488   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:18.488   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:13:18.488   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM1'
00:13:18.488  INFO: Starting fio server on VM1
00:13:18.488   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:13:18.488   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@997 -- # vm_exec 1 'cat > /root/fio; chmod +x /root/fio'
00:13:18.488   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:13:18.488   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:18.488   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:18.488   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:13:18.488   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:13:18.488    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:13:18.488    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:13:18.488    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:18.488    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:18.488    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:13:18.488    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:13:18.488   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:13:18.746  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:13:19.005   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@998 -- # vm_exec 1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:13:19.005   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:13:19.005   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:19.005   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:19.005   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:13:19.005   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:13:19.005    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:13:19.005    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:13:19.005    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:19.005    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:19.005    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:13:19.005    10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:13:19.005   10:10:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:13:19.005  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@54 -- # run_fio --fio-bin=/usr/src/fio-static/fio --job-file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job --out=/root/vhost_test/fio_results --vm=1:/tmp/virtiofs.1/test
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1053 -- # local arg
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1054 -- # local job_file=
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1055 -- # local fio_bin=
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1056 -- # vms=()
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1056 -- # local vms
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1057 -- # local out=
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1058 -- # local vm
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1059 -- # local run_server_mode=true
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1060 -- # local run_plugin_mode=false
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1061 -- # local fio_start_cmd
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1062 -- # local fio_output_format=normal
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1063 -- # local fio_gtod_reduce=false
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1064 -- # local wait_for_fio=true
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1069 -- # local fio_bin=/usr/src/fio-static/fio
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1068 -- # local job_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1072 -- # local out=/root/vhost_test/fio_results
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1073 -- # mkdir -p /root/vhost_test/fio_results
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1092 -- # [[ -n /usr/src/fio-static/fio ]]
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1092 -- # [[ ! -r /usr/src/fio-static/fio ]]
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1097 -- # [[ -z /usr/src/fio-static/fio ]]
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1101 -- # [[ ! -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job ]]
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1106 -- # fio_start_cmd='/usr/src/fio-static/fio --eta=never '
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1108 -- # local job_fname
00:13:19.005    10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1109 -- # basename /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1109 -- # job_fname=default_fsdev.job
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1110 -- # log_fname=default_fsdev.log
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1111 -- # fio_start_cmd+=' --output=/root/vhost_test/fio_results/default_fsdev.log --output-format=normal '
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1115 -- # local vm_num=1
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1116 -- # local vmdisks=/tmp/virtiofs.1/test
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/tmp/virtiofs.1/test@;s@description=\(.*\)@description=\1 (VM=1)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1119 -- # vm_exec 1 'cat > /root/default_fsdev.job'
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:13:19.005    10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:13:19.005    10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:13:19.005    10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:19.005    10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:19.005    10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:13:19.005    10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:13:19.005   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/default_fsdev.job'
00:13:19.005  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:13:19.264   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1121 -- # false
00:13:19.264   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1125 -- # vm_exec 1 cat /root/default_fsdev.job
00:13:19.264   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:13:19.264   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:19.264   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:19.264   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:13:19.264   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:13:19.264    10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:13:19.264    10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:13:19.264    10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:19.264    10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:19.264    10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:13:19.264    10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:13:19.264   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 cat /root/default_fsdev.job
00:13:19.264  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:13:19.264  [global]
00:13:19.264  blocksize=4k
00:13:19.264  iodepth=512
00:13:19.264  ioengine=libaio
00:13:19.264  size=1G
00:13:19.264  group_reporting
00:13:19.264  thread
00:13:19.264  numjobs=1
00:13:19.264  direct=1
00:13:19.264  invalidate=1
00:13:19.264  rw=randrw
00:13:19.264  do_verify=1
00:13:19.264  filename=/tmp/virtiofs.1/test
00:13:19.264  [job0]
00:13:19.264   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1127 -- # true
00:13:19.264    10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1128 -- # vm_fio_socket 1
00:13:19.264    10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@326 -- # vm_num_is_valid 1
00:13:19.264    10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:19.264    10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:19.264    10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/1
00:13:19.264    10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/1/fio_socket
00:13:19.264   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10101 --remote-config /root/default_fsdev.job '
00:13:19.264   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1131 -- # true
00:13:19.264   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1147 -- # true
00:13:19.264   10:10:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1161 -- # /usr/src/fio-static/fio --eta=never --output=/root/vhost_test/fio_results/default_fsdev.log --output-format=normal --client=127.0.0.1,10101 --remote-config /root/default_fsdev.job
00:13:45.819   10:10:38 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1162 -- # sleep 1
00:13:45.820   10:10:39 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1164 -- # [[ normal == \j\s\o\n ]]
00:13:45.820   10:10:39 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1172 -- # [[ ! -n '' ]]
00:13:45.820   10:10:39 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1173 -- # cat /root/vhost_test/fio_results/default_fsdev.log
00:13:45.820  hostname=vhostfedora-cloud-23052, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:13:45.820  <vhostfedora-cloud-23052> job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=512
00:13:45.820  <vhostfedora-cloud-23052> Starting 1 thread
00:13:45.820  <vhostfedora-cloud-23052> job0: Laying out IO file (1 file / 1024MiB)
00:13:45.820  <vhostfedora-cloud-23052> 
00:13:45.820  job0: (groupid=0, jobs=1): err= 0: pid=968: Wed Nov 20 10:10:38 2024
00:13:45.820    read: IOPS=29.9k, BW=117MiB/s (123MB/s)(512MiB/4381msec)
00:13:45.820      slat (usec): min=2, max=154, avg= 3.23, stdev= 2.38
00:13:45.820      clat (usec): min=2195, max=16539, avg=8587.73, stdev=358.53
00:13:45.820       lat (usec): min=2198, max=16542, avg=8590.96, stdev=358.54
00:13:45.820      clat percentiles (usec):
00:13:45.820       |  1.00th=[ 8291],  5.00th=[ 8356], 10.00th=[ 8455], 20.00th=[ 8455],
00:13:45.820       | 30.00th=[ 8586], 40.00th=[ 8586], 50.00th=[ 8586], 60.00th=[ 8586],
00:13:45.820       | 70.00th=[ 8717], 80.00th=[ 8717], 90.00th=[ 8717], 95.00th=[ 8848],
00:13:45.820       | 99.00th=[ 8848], 99.50th=[ 9110], 99.90th=[12387], 99.95th=[14877],
00:13:45.820       | 99.99th=[16450]
00:13:45.820     bw (  KiB/s): min=118544, max=121768, per=100.00%, avg=119793.00, stdev=1004.74, samples=8
00:13:45.820     iops        : min=29636, max=30442, avg=29948.25, stdev=251.19, samples=8
00:13:45.820    write: IOPS=29.9k, BW=117MiB/s (123MB/s)(512MiB/4381msec); 0 zone resets
00:13:45.820      slat (usec): min=2, max=194, avg= 3.75, stdev= 2.62
00:13:45.820      clat (usec): min=2000, max=16544, avg=8510.32, stdev=348.84
00:13:45.820       lat (usec): min=2003, max=16548, avg=8514.06, stdev=348.85
00:13:45.820      clat percentiles (usec):
00:13:45.820       |  1.00th=[ 8225],  5.00th=[ 8291], 10.00th=[ 8356], 20.00th=[ 8455],
00:13:45.820       | 30.00th=[ 8455], 40.00th=[ 8455], 50.00th=[ 8455], 60.00th=[ 8586],
00:13:45.820       | 70.00th=[ 8586], 80.00th=[ 8586], 90.00th=[ 8717], 95.00th=[ 8717],
00:13:45.820       | 99.00th=[ 8848], 99.50th=[ 8979], 99.90th=[12387], 99.95th=[14353],
00:13:45.820       | 99.99th=[16450]
00:13:45.820     bw (  KiB/s): min=118392, max=120168, per=99.91%, avg=119600.00, stdev=569.95, samples=8
00:13:45.820     iops        : min=29598, max=30042, avg=29900.00, stdev=142.49, samples=8
00:13:45.820    lat (msec)   : 4=0.10%, 10=99.59%, 20=0.31%
00:13:45.820    cpu          : usr=10.62%, sys=21.87%, ctx=9839, majf=0, minf=7
00:13:45.820    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:13:45.820       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:45.820       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:13:45.820       issued rwts: total=131040,131104,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:45.820       latency   : target=0, window=0, percentile=100.00%, depth=512
00:13:45.820  
00:13:45.820  Run status group 0 (all jobs):
00:13:45.820     READ: bw=117MiB/s (123MB/s), 117MiB/s-117MiB/s (123MB/s-123MB/s), io=512MiB (537MB), run=4381-4381msec
00:13:45.820    WRITE: bw=117MiB/s (123MB/s), 117MiB/s-117MiB/s (123MB/s-123MB/s), io=512MiB (537MB), run=4381-4381msec
00:13:45.820   10:10:39 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@55 -- # vm_exec 1 'umount /tmp/virtiofs.1'
00:13:45.820   10:10:39 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:13:45.820   10:10:39 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:45.820   10:10:39 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:45.820   10:10:39 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:13:45.820   10:10:39 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:13:45.820    10:10:39 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:13:45.820    10:10:39 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:13:45.820    10:10:39 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:45.820    10:10:39 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:45.820    10:10:39 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:13:45.820    10:10:39 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:13:45.820   10:10:39 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'umount /tmp/virtiofs.1'
00:13:45.820  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@58 -- # notice 'Shutting down virtual machine...'
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...'
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...'
00:13:45.820  INFO: Shutting down virtual machine...
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@59 -- # vm_shutdown_all
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:13:45.820    10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@489 -- # vm_list_all
00:13:45.820    10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@466 -- # vms=()
00:13:45.820    10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@466 -- # local vms
00:13:45.820    10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:13:45.820    10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:13:45.820    10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@492 -- # vm_shutdown 1
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@424 -- # vm_is_running 1
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@376 -- # local vm_pid
00:13:45.820    10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@377 -- # vm_pid=1795184
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@379 -- # /bin/kill -0 1795184
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@380 -- # return 0
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:13:45.820  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@432 -- # set +e
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:13:45.820   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:13:45.821    10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:13:45.821    10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:13:45.821    10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:45.821    10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:45.821    10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:13:45.821    10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:13:45.821  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:13:45.821  INFO: VM1 is shutting down - wait a while to complete
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@435 -- # set -e
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:13:45.821  INFO: Waiting for VMs to shutdown...
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@498 -- # vm_is_running 1
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@376 -- # local vm_pid
00:13:45.821    10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@377 -- # vm_pid=1795184
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@379 -- # /bin/kill -0 1795184
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@380 -- # return 0
00:13:45.821   10:10:40 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@500 -- # sleep 1
00:13:46.390   10:10:41 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:13:46.390   10:10:41 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:13:46.390   10:10:41 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@498 -- # vm_is_running 1
00:13:46.390   10:10:41 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:13:46.390   10:10:41 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:46.390   10:10:41 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:46.390   10:10:41 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:13:46.390   10:10:41 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:13:46.390   10:10:41 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@373 -- # return 1
00:13:46.390   10:10:41 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:13:46.390   10:10:41 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@500 -- # sleep 1
00:13:47.328   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:13:47.328   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:13:47.328   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:13:47.328   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:13:47.328   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:13:47.328   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:13:47.328   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:13:47.328   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:47.328   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:13:47.329  INFO: All VMs successfully shut down
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@505 -- # return 0
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@61 -- # vhost_kill 0
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@202 -- # local rc=0
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@203 -- # local vhost_name=0
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@210 -- # local vhost_dir
00:13:47.329    10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@211 -- # get_vhost_dir 0
00:13:47.329    10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:13:47.329    10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:13:47.329    10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@220 -- # local vhost_pid
00:13:47.329    10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@221 -- # vhost_pid=1794861
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@222 -- # notice 'killing vhost (PID 1794861) app'
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 1794861) app'
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 1794861) app'
00:13:47.329  INFO: killing vhost (PID 1794861) app
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@224 -- # kill -INT 1794861
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:13:47.329  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i = 0 ))
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@227 -- # kill -0 1794861
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@228 -- # echo .
00:13:47.329  .
00:13:47.329   10:10:42 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@229 -- # sleep 1
00:13:48.267  [2024-11-20 10:10:43.173750] vfu_virtio_fs.c: 301:_vfu_virtio_fs_fuse_dispatcher_delete_cpl: *NOTICE*: FUSE dispatcher deleted
00:13:48.267   10:10:43 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i++ ))
00:13:48.267   10:10:43 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:13:48.267   10:10:43 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@227 -- # kill -0 1794861
00:13:48.267   10:10:43 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@228 -- # echo .
00:13:48.267  .
00:13:48.267   10:10:43 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@229 -- # sleep 1
00:13:49.198   10:10:44 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i++ ))
00:13:49.198   10:10:44 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:13:49.198   10:10:44 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@227 -- # kill -0 1794861
00:13:49.198   10:10:44 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@228 -- # echo .
00:13:49.198  .
00:13:49.198   10:10:44 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@229 -- # sleep 1
00:13:50.135   10:10:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i++ ))
00:13:50.135   10:10:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:13:50.135   10:10:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@227 -- # kill -0 1794861
00:13:50.135  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (1794861) - No such process
00:13:50.135   10:10:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@231 -- # break
00:13:50.135   10:10:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@234 -- # kill -0 1794861
00:13:50.135  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (1794861) - No such process
00:13:50.135   10:10:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@239 -- # kill -0 1794861
00:13:50.135  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (1794861) - No such process
00:13:50.135   10:10:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@245 -- # is_pid_child 1794861
00:13:50.135   10:10:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1668 -- # local pid=1794861 _pid
00:13:50.135   10:10:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1670 -- # read -r _pid
00:13:50.135    10:10:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1667 -- # jobs -pr
00:13:50.135   10:10:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1671 -- # (( pid == _pid ))
00:13:50.135   10:10:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1670 -- # read -r _pid
00:13:50.135   10:10:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1674 -- # return 1
00:13:50.135   10:10:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:13:50.135   10:10:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:50.135   10:10:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:13:50.395   10:10:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:13:50.395   10:10:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@261 -- # return 0
00:13:50.395   10:10:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@63 -- # vhosttestfini
00:13:50.395   10:10:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:13:50.395  
00:13:50.395  real	0m57.789s
00:13:50.395  user	3m40.999s
00:13:50.395  sys	0m3.776s
00:13:50.395   10:10:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:50.395   10:10:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:13:50.395  ************************************
00:13:50.395  END TEST vfio_user_virtio_fs_fio
00:13:50.395  ************************************
00:13:50.395   10:10:45 vfio_user_qemu -- vfio_user/vfio_user.sh@26 -- # vhosttestfini
00:13:50.395   10:10:45 vfio_user_qemu -- vhost/common.sh@54 -- # '[' iso == iso ']'
00:13:50.395   10:10:45 vfio_user_qemu -- vhost/common.sh@55 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh reset
00:13:51.776  Waiting for block devices as requested
00:13:51.776  0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:13:51.776  0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:13:51.776  0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:13:51.776  0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:13:51.776  0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:13:52.036  0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:13:52.036  0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:13:52.036  0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:13:52.036  0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:13:52.296  0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:13:52.296  0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:13:52.296  0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:13:52.555  0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:13:52.555  0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:13:52.555  0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:13:52.555  0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:13:52.814  0000:85:00.0 (8086 0a54): vfio-pci -> nvme
00:13:52.814  
00:13:52.814  real	6m30.297s
00:13:52.814  user	27m23.984s
00:13:52.814  sys	0m18.591s
00:13:52.814   10:10:47 vfio_user_qemu -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:52.814   10:10:47 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:52.814  ************************************
00:13:52.814  END TEST vfio_user_qemu
00:13:52.814  ************************************
00:13:52.814   10:10:47  -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:13:52.814   10:10:47  -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:13:52.814   10:10:47  -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:13:52.814   10:10:47  -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:13:52.814   10:10:47  -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:13:52.814   10:10:47  -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:13:52.814   10:10:47  -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:13:52.814   10:10:47  -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:13:52.814   10:10:47  -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:13:53.073   10:10:47  -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:13:53.073   10:10:47  -- spdk/autotest.sh@370 -- # [[ 1 -eq 1 ]]
00:13:53.073   10:10:47  -- spdk/autotest.sh@371 -- # run_test sma /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/sma.sh
00:13:53.073   10:10:47  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:53.073   10:10:47  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:53.073   10:10:47  -- common/autotest_common.sh@10 -- # set +x
00:13:53.073  ************************************
00:13:53.073  START TEST sma
00:13:53.073  ************************************
00:13:53.073   10:10:47 sma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/sma.sh
00:13:53.073  * Looking for test storage...
00:13:53.073  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:13:53.073    10:10:48 sma -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:13:53.073     10:10:48 sma -- common/autotest_common.sh@1693 -- # lcov --version
00:13:53.073     10:10:48 sma -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:13:53.073    10:10:48 sma -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:13:53.073    10:10:48 sma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:53.073    10:10:48 sma -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:53.073    10:10:48 sma -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:53.073    10:10:48 sma -- scripts/common.sh@336 -- # IFS=.-:
00:13:53.073    10:10:48 sma -- scripts/common.sh@336 -- # read -ra ver1
00:13:53.073    10:10:48 sma -- scripts/common.sh@337 -- # IFS=.-:
00:13:53.073    10:10:48 sma -- scripts/common.sh@337 -- # read -ra ver2
00:13:53.073    10:10:48 sma -- scripts/common.sh@338 -- # local 'op=<'
00:13:53.073    10:10:48 sma -- scripts/common.sh@340 -- # ver1_l=2
00:13:53.073    10:10:48 sma -- scripts/common.sh@341 -- # ver2_l=1
00:13:53.073    10:10:48 sma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:53.073    10:10:48 sma -- scripts/common.sh@344 -- # case "$op" in
00:13:53.073    10:10:48 sma -- scripts/common.sh@345 -- # : 1
00:13:53.073    10:10:48 sma -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:53.073    10:10:48 sma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:53.073     10:10:48 sma -- scripts/common.sh@365 -- # decimal 1
00:13:53.073     10:10:48 sma -- scripts/common.sh@353 -- # local d=1
00:13:53.073     10:10:48 sma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:53.073     10:10:48 sma -- scripts/common.sh@355 -- # echo 1
00:13:53.073    10:10:48 sma -- scripts/common.sh@365 -- # ver1[v]=1
00:13:53.073     10:10:48 sma -- scripts/common.sh@366 -- # decimal 2
00:13:53.073     10:10:48 sma -- scripts/common.sh@353 -- # local d=2
00:13:53.073     10:10:48 sma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:53.073     10:10:48 sma -- scripts/common.sh@355 -- # echo 2
00:13:53.073    10:10:48 sma -- scripts/common.sh@366 -- # ver2[v]=2
00:13:53.073    10:10:48 sma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:53.073    10:10:48 sma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:53.073    10:10:48 sma -- scripts/common.sh@368 -- # return 0
00:13:53.073    10:10:48 sma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:53.073    10:10:48 sma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:13:53.073  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:53.073  		--rc genhtml_branch_coverage=1
00:13:53.073  		--rc genhtml_function_coverage=1
00:13:53.073  		--rc genhtml_legend=1
00:13:53.073  		--rc geninfo_all_blocks=1
00:13:53.073  		--rc geninfo_unexecuted_blocks=1
00:13:53.073  		
00:13:53.073  		'
00:13:53.073    10:10:48 sma -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:13:53.073  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:53.073  		--rc genhtml_branch_coverage=1
00:13:53.073  		--rc genhtml_function_coverage=1
00:13:53.073  		--rc genhtml_legend=1
00:13:53.073  		--rc geninfo_all_blocks=1
00:13:53.073  		--rc geninfo_unexecuted_blocks=1
00:13:53.073  		
00:13:53.073  		'
00:13:53.073    10:10:48 sma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:13:53.073  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:53.073  		--rc genhtml_branch_coverage=1
00:13:53.073  		--rc genhtml_function_coverage=1
00:13:53.073  		--rc genhtml_legend=1
00:13:53.073  		--rc geninfo_all_blocks=1
00:13:53.073  		--rc geninfo_unexecuted_blocks=1
00:13:53.073  		
00:13:53.073  		'
00:13:53.073    10:10:48 sma -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:13:53.074  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:53.074  		--rc genhtml_branch_coverage=1
00:13:53.074  		--rc genhtml_function_coverage=1
00:13:53.074  		--rc genhtml_legend=1
00:13:53.074  		--rc geninfo_all_blocks=1
00:13:53.074  		--rc geninfo_unexecuted_blocks=1
00:13:53.074  		
00:13:53.074  		'
00:13:53.074   10:10:48 sma -- sma/sma.sh@11 -- # run_test sma_nvmf_tcp /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/nvmf_tcp.sh
00:13:53.074   10:10:48 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:53.074   10:10:48 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:53.074   10:10:48 sma -- common/autotest_common.sh@10 -- # set +x
00:13:53.074  ************************************
00:13:53.074  START TEST sma_nvmf_tcp
00:13:53.074  ************************************
00:13:53.074   10:10:48 sma.sma_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/nvmf_tcp.sh
00:13:53.074  * Looking for test storage...
00:13:53.074  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:13:53.074    10:10:48 sma.sma_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:13:53.074     10:10:48 sma.sma_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version
00:13:53.074     10:10:48 sma.sma_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@345 -- # : 1
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:53.332     10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1
00:13:53.332     10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@353 -- # local d=1
00:13:53.332     10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:53.332     10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@355 -- # echo 1
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:13:53.332     10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2
00:13:53.332     10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@353 -- # local d=2
00:13:53.332     10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:53.332     10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@355 -- # echo 2
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- scripts/common.sh@368 -- # return 0
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:13:53.332  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:53.332  		--rc genhtml_branch_coverage=1
00:13:53.332  		--rc genhtml_function_coverage=1
00:13:53.332  		--rc genhtml_legend=1
00:13:53.332  		--rc geninfo_all_blocks=1
00:13:53.332  		--rc geninfo_unexecuted_blocks=1
00:13:53.332  		
00:13:53.332  		'
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:13:53.332  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:53.332  		--rc genhtml_branch_coverage=1
00:13:53.332  		--rc genhtml_function_coverage=1
00:13:53.332  		--rc genhtml_legend=1
00:13:53.332  		--rc geninfo_all_blocks=1
00:13:53.332  		--rc geninfo_unexecuted_blocks=1
00:13:53.332  		
00:13:53.332  		'
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:13:53.332  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:53.332  		--rc genhtml_branch_coverage=1
00:13:53.332  		--rc genhtml_function_coverage=1
00:13:53.332  		--rc genhtml_legend=1
00:13:53.332  		--rc geninfo_all_blocks=1
00:13:53.332  		--rc geninfo_unexecuted_blocks=1
00:13:53.332  		
00:13:53.332  		'
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:13:53.332  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:53.332  		--rc genhtml_branch_coverage=1
00:13:53.332  		--rc genhtml_function_coverage=1
00:13:53.332  		--rc genhtml_legend=1
00:13:53.332  		--rc geninfo_all_blocks=1
00:13:53.332  		--rc geninfo_unexecuted_blocks=1
00:13:53.332  		
00:13:53.332  		'
00:13:53.332   10:10:48 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:13:53.332   10:10:48 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@70 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:13:53.332   10:10:48 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@73 -- # tgtpid=1802771
00:13:53.332   10:10:48 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@72 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:13:53.332   10:10:48 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@83 -- # smapid=1802772
00:13:53.332   10:10:48 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@86 -- # sma_waitforlisten
00:13:53.332   10:10:48 sma.sma_nvmf_tcp -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:13:53.332   10:10:48 sma.sma_nvmf_tcp -- sma/common.sh@8 -- # local sma_port=8080
00:13:53.332   10:10:48 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i = 0 ))
00:13:53.332   10:10:48 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@75 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:13:53.332   10:10:48 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i < 5 ))
00:13:53.332    10:10:48 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@75 -- # cat
00:13:53.333   10:10:48 sma.sma_nvmf_tcp -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:13:53.333   10:10:48 sma.sma_nvmf_tcp -- sma/common.sh@14 -- # sleep 1s
00:13:53.333  [2024-11-20 10:10:48.352902] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:13:53.333  [2024-11-20 10:10:48.353035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1802771 ]
00:13:53.333  EAL: No free 2048 kB hugepages reported on node 1
00:13:53.592  [2024-11-20 10:10:48.485159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:53.592  [2024-11-20 10:10:48.601130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:54.162   10:10:49 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i++ ))
00:13:54.162   10:10:49 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i < 5 ))
00:13:54.162   10:10:49 sma.sma_nvmf_tcp -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:13:54.420   10:10:49 sma.sma_nvmf_tcp -- sma/common.sh@14 -- # sleep 1s
00:13:54.420  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:54.420  I0000 00:00:1732093849.501167 1802772 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:54.420  [2024-11-20 10:10:49.514767] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:55.362   10:10:50 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i++ ))
00:13:55.362   10:10:50 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i < 5 ))
00:13:55.362   10:10:50 sma.sma_nvmf_tcp -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:13:55.362   10:10:50 sma.sma_nvmf_tcp -- sma/common.sh@12 -- # return 0
00:13:55.362   10:10:50 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@89 -- # rpc_cmd bdev_null_create null0 100 4096
00:13:55.362   10:10:50 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:55.362   10:10:50 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:55.362  null0
00:13:55.362   10:10:50 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:55.362   10:10:50 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@92 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:13:55.362   10:10:50 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:55.362   10:10:50 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:55.362  [
00:13:55.362  {
00:13:55.362  "trtype": "TCP",
00:13:55.362  "max_queue_depth": 128,
00:13:55.362  "max_io_qpairs_per_ctrlr": 127,
00:13:55.362  "in_capsule_data_size": 4096,
00:13:55.362  "max_io_size": 131072,
00:13:55.362  "io_unit_size": 131072,
00:13:55.362  "max_aq_depth": 128,
00:13:55.362  "num_shared_buffers": 511,
00:13:55.362  "buf_cache_size": 4294967295,
00:13:55.362  "dif_insert_or_strip": false,
00:13:55.362  "zcopy": false,
00:13:55.362  "c2h_success": true,
00:13:55.362  "sock_priority": 0,
00:13:55.362  "abort_timeout_sec": 1,
00:13:55.362  "ack_timeout": 0,
00:13:55.362  "data_wr_pool_size": 0
00:13:55.362  }
00:13:55.362  ]
00:13:55.362   10:10:50 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:55.362    10:10:50 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@95 -- # create_device nqn.2016-06.io.spdk:cnode0
00:13:55.362    10:10:50 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:55.362    10:10:50 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@95 -- # jq -r .handle
00:13:55.621  I0000 00:00:1732093850.598629 1803071 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:55.621  I0000 00:00:1732093850.600472 1803071 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:55.621  I0000 00:00:1732093850.601966 1803078 subchannel.cc:806] subchannel 0x55a4bd0cd180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55a4bcfda1c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55a4bd07e460, grpc.internal.client_channel_call_destination=0x7f24515f2390, grpc.internal.event_engine=0x55a4bd040440, grpc.internal.security_connector=0x55a4bcf36650, grpc.internal.subchannel_pool=0x55a4bd0b4c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55a4bccfd2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:10:50.601469763+01:00"}), backing off for 999 ms
00:13:55.621  [2024-11-20 10:10:50.622610] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:13:55.621   10:10:50 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@95 -- # devid0=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:13:55.621   10:10:50 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@96 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:13:55.621   10:10:50 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:55.621   10:10:50 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:55.621  [
00:13:55.621  {
00:13:55.621  "nqn": "nqn.2016-06.io.spdk:cnode0",
00:13:55.621  "subtype": "NVMe",
00:13:55.621  "listen_addresses": [
00:13:55.621  {
00:13:55.621  "trtype": "TCP",
00:13:55.621  "adrfam": "IPv4",
00:13:55.621  "traddr": "127.0.0.1",
00:13:55.621  "trsvcid": "4420"
00:13:55.621  }
00:13:55.621  ],
00:13:55.621  "allow_any_host": false,
00:13:55.621  "hosts": [],
00:13:55.621  "serial_number": "00000000000000000000",
00:13:55.621  "model_number": "SPDK bdev Controller",
00:13:55.621  "max_namespaces": 32,
00:13:55.621  "min_cntlid": 1,
00:13:55.621  "max_cntlid": 65519,
00:13:55.621  "namespaces": []
00:13:55.621  }
00:13:55.621  ]
00:13:55.621   10:10:50 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:55.621    10:10:50 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@98 -- # create_device nqn.2016-06.io.spdk:cnode1
00:13:55.621    10:10:50 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@98 -- # jq -r .handle
00:13:55.621    10:10:50 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:55.879  I0000 00:00:1732093850.890248 1803106 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:55.880  I0000 00:00:1732093850.892057 1803106 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:55.880  I0000 00:00:1732093850.893780 1803107 subchannel.cc:806] subchannel 0x5585ecc7a180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5585ecb871c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5585ecc2b460, grpc.internal.client_channel_call_destination=0x7f4dcb1b7390, grpc.internal.event_engine=0x5585ecbed440, grpc.internal.security_connector=0x5585ecae3650, grpc.internal.subchannel_pool=0x5585ecc61c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5585ec8aa2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:10:50.893241179+01:00"}), backing off for 1000 ms
00:13:55.880   10:10:50 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@98 -- # devid1=nvmf-tcp:nqn.2016-06.io.spdk:cnode1
00:13:55.880   10:10:50 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@99 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:13:55.880   10:10:50 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:55.880   10:10:50 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:55.880  [
00:13:55.880  {
00:13:55.880  "nqn": "nqn.2016-06.io.spdk:cnode0",
00:13:55.880  "subtype": "NVMe",
00:13:55.880  "listen_addresses": [
00:13:55.880  {
00:13:55.880  "trtype": "TCP",
00:13:55.880  "adrfam": "IPv4",
00:13:55.880  "traddr": "127.0.0.1",
00:13:55.880  "trsvcid": "4420"
00:13:55.880  }
00:13:55.880  ],
00:13:55.880  "allow_any_host": false,
00:13:55.880  "hosts": [],
00:13:55.880  "serial_number": "00000000000000000000",
00:13:55.880  "model_number": "SPDK bdev Controller",
00:13:55.880  "max_namespaces": 32,
00:13:55.880  "min_cntlid": 1,
00:13:55.880  "max_cntlid": 65519,
00:13:55.880  "namespaces": []
00:13:55.880  }
00:13:55.880  ]
00:13:55.880   10:10:50 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:55.880   10:10:50 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@100 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:13:55.880   10:10:50 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:55.880   10:10:50 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:55.880  [
00:13:55.880  {
00:13:55.880  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:13:55.880  "subtype": "NVMe",
00:13:55.880  "listen_addresses": [
00:13:55.880  {
00:13:55.880  "trtype": "TCP",
00:13:55.880  "adrfam": "IPv4",
00:13:55.880  "traddr": "127.0.0.1",
00:13:55.880  "trsvcid": "4420"
00:13:55.880  }
00:13:55.880  ],
00:13:55.880  "allow_any_host": false,
00:13:55.880  "hosts": [],
00:13:55.880  "serial_number": "00000000000000000000",
00:13:55.880  "model_number": "SPDK bdev Controller",
00:13:55.880  "max_namespaces": 32,
00:13:55.880  "min_cntlid": 1,
00:13:55.880  "max_cntlid": 65519,
00:13:55.880  "namespaces": []
00:13:55.880  }
00:13:55.880  ]
00:13:55.880   10:10:50 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:55.880   10:10:50 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@101 -- # [[ nvmf-tcp:nqn.2016-06.io.spdk:cnode0 != \n\v\m\f\-\t\c\p\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:13:55.880    10:10:50 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@104 -- # rpc_cmd nvmf_get_subsystems
00:13:55.880    10:10:50 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@104 -- # jq -r '. | length'
00:13:55.880    10:10:50 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:55.880    10:10:50 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:55.880    10:10:50 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.139   10:10:50 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@104 -- # [[ 3 -eq 3 ]]
00:13:56.139    10:10:50 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@108 -- # create_device nqn.2016-06.io.spdk:cnode0
00:13:56.139    10:10:51 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:56.139    10:10:50 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@108 -- # jq -r .handle
00:13:56.397  I0000 00:00:1732093851.262601 1803133 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:56.397  I0000 00:00:1732093851.264345 1803133 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:56.397  I0000 00:00:1732093851.265960 1803258 subchannel.cc:806] subchannel 0x559574a7a180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5595749871c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x559574a2b460, grpc.internal.client_channel_call_destination=0x7f2606cbe390, grpc.internal.event_engine=0x5595749ed440, grpc.internal.security_connector=0x5595748e3650, grpc.internal.subchannel_pool=0x559574a61c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5595746aa2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:10:51.265421522+01:00"}), backing off for 999 ms
00:13:56.397   10:10:51 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@108 -- # tmp0=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:13:56.397    10:10:51 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@109 -- # create_device nqn.2016-06.io.spdk:cnode1
00:13:56.397    10:10:51 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@109 -- # jq -r .handle
00:13:56.397    10:10:51 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:56.656  I0000 00:00:1732093851.523587 1803281 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:56.656  I0000 00:00:1732093851.525661 1803281 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:56.656  I0000 00:00:1732093851.527205 1803287 subchannel.cc:806] subchannel 0x5617cd7e8180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5617cd6f51c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5617cd799460, grpc.internal.client_channel_call_destination=0x7fee855c0390, grpc.internal.event_engine=0x5617cd75b440, grpc.internal.security_connector=0x5617cd651650, grpc.internal.subchannel_pool=0x5617cd7cfc10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5617cd4182f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:10:51.526696794+01:00"}), backing off for 1000 ms
00:13:56.656   10:10:51 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@109 -- # tmp1=nvmf-tcp:nqn.2016-06.io.spdk:cnode1
00:13:56.656    10:10:51 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@111 -- # rpc_cmd nvmf_get_subsystems
00:13:56.656    10:10:51 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@111 -- # jq -r '. | length'
00:13:56.656    10:10:51 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.656    10:10:51 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:56.656    10:10:51 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.656   10:10:51 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@111 -- # [[ 3 -eq 3 ]]
00:13:56.656   10:10:51 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@112 -- # [[ nvmf-tcp:nqn.2016-06.io.spdk:cnode0 == \n\v\m\f\-\t\c\p\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:13:56.656   10:10:51 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@113 -- # [[ nvmf-tcp:nqn.2016-06.io.spdk:cnode1 == \n\v\m\f\-\t\c\p\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:13:56.656   10:10:51 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@116 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:13:56.656   10:10:51 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:56.915  I0000 00:00:1732093851.822679 1803311 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:56.915  I0000 00:00:1732093851.824481 1803311 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:56.915  I0000 00:00:1732093851.825973 1803312 subchannel.cc:806] subchannel 0x561de4bf1180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x561de4afe1c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x561de4ba2460, grpc.internal.client_channel_call_destination=0x7f41e5c9b390, grpc.internal.event_engine=0x561de4b64440, grpc.internal.security_connector=0x561de4a4dda0, grpc.internal.subchannel_pool=0x561de4bd8c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x561de48212f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:10:51.825461472+01:00"}), backing off for 999 ms
00:13:56.915  {}
00:13:56.915   10:10:51 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@117 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:13:56.915   10:10:51 sma.sma_nvmf_tcp -- common/autotest_common.sh@652 -- # local es=0
00:13:56.915   10:10:51 sma.sma_nvmf_tcp -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:13:56.915   10:10:51 sma.sma_nvmf_tcp -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:13:56.915   10:10:51 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:56.915    10:10:51 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:13:56.915   10:10:51 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:56.915   10:10:51 sma.sma_nvmf_tcp -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:13:56.915   10:10:51 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.915   10:10:51 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:56.915  [2024-11-20 10:10:51.869731] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:cnode0' does not exist
00:13:56.915  request:
00:13:56.915  {
00:13:56.915  "nqn": "nqn.2016-06.io.spdk:cnode0",
00:13:56.915  "method": "nvmf_get_subsystems",
00:13:56.915  "req_id": 1
00:13:56.915  }
00:13:56.915  Got JSON-RPC error response
00:13:56.915  response:
00:13:56.915  {
00:13:56.915  "code": -19,
00:13:56.915  "message": "No such device"
00:13:56.915  }
00:13:56.915   10:10:51 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:13:56.915   10:10:51 sma.sma_nvmf_tcp -- common/autotest_common.sh@655 -- # es=1
00:13:56.915   10:10:51 sma.sma_nvmf_tcp -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:56.915   10:10:51 sma.sma_nvmf_tcp -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:56.915   10:10:51 sma.sma_nvmf_tcp -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:56.915    10:10:51 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@118 -- # rpc_cmd nvmf_get_subsystems
00:13:56.915    10:10:51 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.915    10:10:51 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:56.915    10:10:51 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@118 -- # jq -r '. | length'
00:13:56.915    10:10:51 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.915   10:10:51 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@118 -- # [[ 2 -eq 2 ]]
00:13:56.915   10:10:51 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@120 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode1
00:13:56.915   10:10:51 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:57.175  I0000 00:00:1732093852.148149 1803336 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:57.175  I0000 00:00:1732093852.149957 1803336 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:57.175  I0000 00:00:1732093852.151546 1803343 subchannel.cc:806] subchannel 0x5603e2898180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5603e27a51c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5603e2849460, grpc.internal.client_channel_call_destination=0x7f4710842390, grpc.internal.event_engine=0x5603e280b440, grpc.internal.security_connector=0x5603e26f4da0, grpc.internal.subchannel_pool=0x5603e287fc10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5603e24c82f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:10:52.151008674+01:00"}), backing off for 1000 ms
00:13:57.175  {}
00:13:57.175   10:10:52 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@121 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:13:57.175   10:10:52 sma.sma_nvmf_tcp -- common/autotest_common.sh@652 -- # local es=0
00:13:57.175   10:10:52 sma.sma_nvmf_tcp -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:13:57.175   10:10:52 sma.sma_nvmf_tcp -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:13:57.175   10:10:52 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:57.175    10:10:52 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:13:57.175   10:10:52 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:57.175   10:10:52 sma.sma_nvmf_tcp -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:13:57.175   10:10:52 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:57.175   10:10:52 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:57.175  [2024-11-20 10:10:52.194718] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:cnode1' does not exist
00:13:57.175  request:
00:13:57.175  {
00:13:57.175  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:13:57.175  "method": "nvmf_get_subsystems",
00:13:57.175  "req_id": 1
00:13:57.175  }
00:13:57.175  Got JSON-RPC error response
00:13:57.175  response:
00:13:57.175  {
00:13:57.175  "code": -19,
00:13:57.175  "message": "No such device"
00:13:57.175  }
00:13:57.175   10:10:52 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:13:57.175   10:10:52 sma.sma_nvmf_tcp -- common/autotest_common.sh@655 -- # es=1
00:13:57.175   10:10:52 sma.sma_nvmf_tcp -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:57.175   10:10:52 sma.sma_nvmf_tcp -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:57.175   10:10:52 sma.sma_nvmf_tcp -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:57.175    10:10:52 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@122 -- # rpc_cmd nvmf_get_subsystems
00:13:57.175    10:10:52 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@122 -- # jq -r '. | length'
00:13:57.175    10:10:52 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:57.175    10:10:52 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:57.175    10:10:52 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:57.175   10:10:52 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@122 -- # [[ 1 -eq 1 ]]
00:13:57.175   10:10:52 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@125 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:13:57.175   10:10:52 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:57.434  I0000 00:00:1732093852.469632 1803438 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:57.434  I0000 00:00:1732093852.471513 1803438 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:57.434  I0000 00:00:1732093852.473028 1803492 subchannel.cc:806] subchannel 0x5601431eb180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5601430f81c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56014319c460, grpc.internal.client_channel_call_destination=0x7f2acfca1390, grpc.internal.event_engine=0x56014315e440, grpc.internal.security_connector=0x560143047da0, grpc.internal.subchannel_pool=0x5601431d2c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x560142e1b2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:10:52.472515764+01:00"}), backing off for 999 ms
00:13:57.434  {}
00:13:57.434   10:10:52 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@126 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode1
00:13:57.434   10:10:52 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:57.691  I0000 00:00:1732093852.722563 1803512 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:57.691  I0000 00:00:1732093852.724294 1803512 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:57.691  I0000 00:00:1732093852.725871 1803519 subchannel.cc:806] subchannel 0x5569d44bf180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5569d43cc1c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5569d4470460, grpc.internal.client_channel_call_destination=0x7f50eb1a2390, grpc.internal.event_engine=0x5569d4432440, grpc.internal.security_connector=0x5569d431bda0, grpc.internal.subchannel_pool=0x5569d44a6c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5569d40ef2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:10:52.725304672+01:00"}), backing off for 1000 ms
00:13:57.691  {}
00:13:57.691    10:10:52 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@129 -- # create_device nqn.2016-06.io.spdk:cnode0
00:13:57.691    10:10:52 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@129 -- # jq -r .handle
00:13:57.691    10:10:52 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:57.950  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:57.950  I0000 00:00:1732093852.980525 1803542 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:57.950  I0000 00:00:1732093852.982273 1803542 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:57.950  I0000 00:00:1732093852.983838 1803543 subchannel.cc:806] subchannel 0x55594a7c4180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55594a6d11c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55594a775460, grpc.internal.client_channel_call_destination=0x7f6cb89fc390, grpc.internal.event_engine=0x55594a737440, grpc.internal.security_connector=0x55594a62d650, grpc.internal.subchannel_pool=0x55594a7abc10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55594a3f42f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:10:52.983307481+01:00"}), backing off for 1000 ms
00:13:57.950  [2024-11-20 10:10:53.001693] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:13:57.950   10:10:53 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@129 -- # devid0=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:13:57.950    10:10:53 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@130 -- # create_device nqn.2016-06.io.spdk:cnode1
00:13:57.950    10:10:53 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@130 -- # jq -r .handle
00:13:57.950    10:10:53 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:58.208  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:58.208  I0000 00:00:1732093853.252861 1803566 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:58.208  I0000 00:00:1732093853.254689 1803566 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:58.208  I0000 00:00:1732093853.256283 1803567 subchannel.cc:806] subchannel 0x55b6e2d39180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55b6e2c461c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55b6e2cea460, grpc.internal.client_channel_call_destination=0x7fd52a6ba390, grpc.internal.event_engine=0x55b6e2cac440, grpc.internal.security_connector=0x55b6e2ba2650, grpc.internal.subchannel_pool=0x55b6e2d20c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55b6e29692f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:10:53.255729877+01:00"}), backing off for 1000 ms
00:13:58.208   10:10:53 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@130 -- # devid1=nvmf-tcp:nqn.2016-06.io.spdk:cnode1
00:13:58.208    10:10:53 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@131 -- # rpc_cmd bdev_get_bdevs -b null0
00:13:58.208    10:10:53 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:58.208    10:10:53 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@131 -- # jq -r '.[].uuid'
00:13:58.208    10:10:53 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:58.208    10:10:53 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:58.466   10:10:53 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@131 -- # uuid=953bdfb8-29ec-4c32-9b6f-4a42fbf29318
00:13:58.467   10:10:53 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@134 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 953bdfb8-29ec-4c32-9b6f-4a42fbf29318
00:13:58.467   10:10:53 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@45 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:58.467    10:10:53 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@45 -- # uuid2base64 953bdfb8-29ec-4c32-9b6f-4a42fbf29318
00:13:58.467    10:10:53 sma.sma_nvmf_tcp -- sma/common.sh@20 -- # python
00:13:58.726  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:58.726  I0000 00:00:1732093853.618282 1803638 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:58.726  I0000 00:00:1732093853.620138 1803638 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:58.726  I0000 00:00:1732093853.621723 1803726 subchannel.cc:806] subchannel 0x561f84504180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x561f844111c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x561f844b5460, grpc.internal.client_channel_call_destination=0x7f95a0490390, grpc.internal.event_engine=0x561f84477440, grpc.internal.security_connector=0x561f844ebd00, grpc.internal.subchannel_pool=0x561f844ebc10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x561f841342f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:10:53.621209109+01:00"}), backing off for 1000 ms
00:13:58.726  {}
00:13:58.726    10:10:53 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@135 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:13:58.726    10:10:53 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:58.726    10:10:53 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@135 -- # jq -r '.[0].namespaces | length'
00:13:58.726    10:10:53 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:58.726    10:10:53 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:58.726   10:10:53 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@135 -- # [[ 1 -eq 1 ]]
00:13:58.726    10:10:53 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@136 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:13:58.726    10:10:53 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:58.726    10:10:53 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:58.726    10:10:53 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@136 -- # jq -r '.[0].namespaces | length'
00:13:58.726    10:10:53 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:58.726   10:10:53 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@136 -- # [[ 0 -eq 0 ]]
00:13:58.726    10:10:53 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@137 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:13:58.726    10:10:53 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:58.726    10:10:53 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@137 -- # jq -r '.[0].namespaces[0].uuid'
00:13:58.726    10:10:53 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:58.726    10:10:53 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:58.726   10:10:53 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@137 -- # [[ 953bdfb8-29ec-4c32-9b6f-4a42fbf29318 == \9\5\3\b\d\f\b\8\-\2\9\e\c\-\4\c\3\2\-\9\b\6\f\-\4\a\4\2\f\b\f\2\9\3\1\8 ]]
00:13:58.726   10:10:53 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@140 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 953bdfb8-29ec-4c32-9b6f-4a42fbf29318
00:13:58.726   10:10:53 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@45 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:58.726    10:10:53 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@45 -- # uuid2base64 953bdfb8-29ec-4c32-9b6f-4a42fbf29318
00:13:58.726    10:10:53 sma.sma_nvmf_tcp -- sma/common.sh@20 -- # python
00:13:58.984  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:58.985  I0000 00:00:1732093854.055164 1803755 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:58.985  I0000 00:00:1732093854.056975 1803755 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:58.985  I0000 00:00:1732093854.058604 1803759 subchannel.cc:806] subchannel 0x555bda0e6180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x555bd9ff31c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x555bda097460, grpc.internal.client_channel_call_destination=0x7f9bb8af7390, grpc.internal.event_engine=0x555bda059440, grpc.internal.security_connector=0x555bda0cdd00, grpc.internal.subchannel_pool=0x555bda0cdc10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x555bd9d162f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:10:54.058085617+01:00"}), backing off for 1000 ms
00:13:58.985  {}
00:13:58.985    10:10:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@141 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:13:58.985    10:10:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@141 -- # jq -r '.[0].namespaces | length'
00:13:58.985    10:10:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:58.985    10:10:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:58.985    10:10:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:59.243   10:10:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@141 -- # [[ 1 -eq 1 ]]
00:13:59.243    10:10:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@142 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:13:59.243    10:10:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@142 -- # jq -r '.[0].namespaces | length'
00:13:59.243    10:10:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:59.243    10:10:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:59.243    10:10:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:59.243   10:10:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@142 -- # [[ 0 -eq 0 ]]
00:13:59.243    10:10:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@143 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:13:59.243    10:10:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@143 -- # jq -r '.[0].namespaces[0].uuid'
00:13:59.243    10:10:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:59.243    10:10:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:59.243    10:10:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:59.243   10:10:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@143 -- # [[ 953bdfb8-29ec-4c32-9b6f-4a42fbf29318 == \9\5\3\b\d\f\b\8\-\2\9\e\c\-\4\c\3\2\-\9\b\6\f\-\4\a\4\2\f\b\f\2\9\3\1\8 ]]
00:13:59.243   10:10:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@146 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 953bdfb8-29ec-4c32-9b6f-4a42fbf29318
00:13:59.243   10:10:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@59 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:59.243    10:10:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@59 -- # uuid2base64 953bdfb8-29ec-4c32-9b6f-4a42fbf29318
00:13:59.243    10:10:54 sma.sma_nvmf_tcp -- sma/common.sh@20 -- # python
00:13:59.501  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:59.502  I0000 00:00:1732093854.490355 1803788 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:59.502  I0000 00:00:1732093854.492346 1803788 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:59.502  I0000 00:00:1732093854.493928 1803878 subchannel.cc:806] subchannel 0x5648305ce180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5648304db1c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56483057f460, grpc.internal.client_channel_call_destination=0x7f7d4db2e390, grpc.internal.event_engine=0x564830541440, grpc.internal.security_connector=0x564830437650, grpc.internal.subchannel_pool=0x5648305b5c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5648301fe2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:10:54.493392626+01:00"}), backing off for 999 ms
00:13:59.502  {}
00:13:59.502    10:10:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@147 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:13:59.502    10:10:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:59.502    10:10:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@147 -- # jq -r '.[0].namespaces | length'
00:13:59.502    10:10:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:59.502    10:10:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:59.502   10:10:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@147 -- # [[ 0 -eq 0 ]]
00:13:59.502    10:10:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@148 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:13:59.502    10:10:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:59.502    10:10:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:59.502    10:10:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@148 -- # jq -r '.[0].namespaces | length'
00:13:59.502    10:10:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:59.502   10:10:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@148 -- # [[ 0 -eq 0 ]]
00:13:59.502   10:10:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@151 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 953bdfb8-29ec-4c32-9b6f-4a42fbf29318
00:13:59.502   10:10:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@59 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:59.502    10:10:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@59 -- # uuid2base64 953bdfb8-29ec-4c32-9b6f-4a42fbf29318
00:13:59.502    10:10:54 sma.sma_nvmf_tcp -- sma/common.sh@20 -- # python
00:14:00.071  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:00.071  I0000 00:00:1732093854.896922 1803945 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:00.071  I0000 00:00:1732093854.898627 1803945 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:00.071  I0000 00:00:1732093854.900084 1803953 subchannel.cc:806] subchannel 0x55982087a180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5598207871c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55982082b460, grpc.internal.client_channel_call_destination=0x7f07d298e390, grpc.internal.event_engine=0x5598207ed440, grpc.internal.security_connector=0x5598206e3650, grpc.internal.subchannel_pool=0x559820861c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5598204aa2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:10:54.899612499+01:00"}), backing off for 999 ms
00:14:00.071  {}
00:14:00.071   10:10:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@153 -- # cleanup
00:14:00.071   10:10:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@13 -- # killprocess 1802771
00:14:00.071   10:10:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1802771 ']'
00:14:00.071   10:10:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1802771
00:14:00.071    10:10:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@959 -- # uname
00:14:00.071   10:10:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:00.071    10:10:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1802771
00:14:00.071   10:10:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:00.071   10:10:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:00.071   10:10:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1802771'
00:14:00.071  killing process with pid 1802771
00:14:00.071   10:10:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1802771
00:14:00.071   10:10:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1802771
00:14:02.070   10:10:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@14 -- # killprocess 1802772
00:14:02.070   10:10:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1802772 ']'
00:14:02.070   10:10:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1802772
00:14:02.070    10:10:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@959 -- # uname
00:14:02.070   10:10:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:02.070    10:10:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1802772
00:14:02.070   10:10:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=python3
00:14:02.070   10:10:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:14:02.070   10:10:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1802772'
00:14:02.070  killing process with pid 1802772
00:14:02.070   10:10:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1802772
00:14:02.070   10:10:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1802772
00:14:02.070   10:10:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@154 -- # trap - SIGINT SIGTERM EXIT
00:14:02.070  
00:14:02.070  real	0m9.000s
00:14:02.070  user	0m12.621s
00:14:02.070  sys	0m1.324s
00:14:02.070   10:10:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:02.070   10:10:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:02.070  ************************************
00:14:02.070  END TEST sma_nvmf_tcp
00:14:02.070  ************************************
00:14:02.070   10:10:57 sma -- sma/sma.sh@12 -- # run_test sma_vfiouser_qemu /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vfiouser_qemu.sh
00:14:02.070   10:10:57 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:02.070   10:10:57 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:02.070   10:10:57 sma -- common/autotest_common.sh@10 -- # set +x
00:14:02.070  ************************************
00:14:02.070  START TEST sma_vfiouser_qemu
00:14:02.070  ************************************
00:14:02.070   10:10:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vfiouser_qemu.sh
00:14:02.330  * Looking for test storage...
00:14:02.330  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:14:02.330     10:10:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1693 -- # lcov --version
00:14:02.330     10:10:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@336 -- # IFS=.-:
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@336 -- # read -ra ver1
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@337 -- # IFS=.-:
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@337 -- # read -ra ver2
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@338 -- # local 'op=<'
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@340 -- # ver1_l=2
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@341 -- # ver2_l=1
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@344 -- # case "$op" in
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@345 -- # : 1
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:02.330     10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@365 -- # decimal 1
00:14:02.330     10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@353 -- # local d=1
00:14:02.330     10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:02.330     10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@355 -- # echo 1
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@365 -- # ver1[v]=1
00:14:02.330     10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@366 -- # decimal 2
00:14:02.330     10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@353 -- # local d=2
00:14:02.330     10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:02.330     10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@355 -- # echo 2
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@366 -- # ver2[v]=2
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- scripts/common.sh@368 -- # return 0
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:14:02.330  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:02.330  		--rc genhtml_branch_coverage=1
00:14:02.330  		--rc genhtml_function_coverage=1
00:14:02.330  		--rc genhtml_legend=1
00:14:02.330  		--rc geninfo_all_blocks=1
00:14:02.330  		--rc geninfo_unexecuted_blocks=1
00:14:02.330  		
00:14:02.330  		'
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:14:02.330  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:02.330  		--rc genhtml_branch_coverage=1
00:14:02.330  		--rc genhtml_function_coverage=1
00:14:02.330  		--rc genhtml_legend=1
00:14:02.330  		--rc geninfo_all_blocks=1
00:14:02.330  		--rc geninfo_unexecuted_blocks=1
00:14:02.330  		
00:14:02.330  		'
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:14:02.330  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:02.330  		--rc genhtml_branch_coverage=1
00:14:02.330  		--rc genhtml_function_coverage=1
00:14:02.330  		--rc genhtml_legend=1
00:14:02.330  		--rc geninfo_all_blocks=1
00:14:02.330  		--rc geninfo_unexecuted_blocks=1
00:14:02.330  		
00:14:02.330  		'
00:14:02.330    10:10:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:14:02.330  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:02.330  		--rc genhtml_branch_coverage=1
00:14:02.331  		--rc genhtml_function_coverage=1
00:14:02.331  		--rc genhtml_legend=1
00:14:02.331  		--rc geninfo_all_blocks=1
00:14:02.331  		--rc geninfo_unexecuted_blocks=1
00:14:02.331  		
00:14:02.331  		'
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:14:02.331    10:10:57 sma.sma_vfiouser_qemu -- vfio_user/common.sh@6 -- # : 128
00:14:02.331    10:10:57 sma.sma_vfiouser_qemu -- vfio_user/common.sh@7 -- # : 512
00:14:02.331    10:10:57 sma.sma_vfiouser_qemu -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:14:02.331     10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@6 -- # : false
00:14:02.331     10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@7 -- # : /root/vhost_test
00:14:02.331     10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@8 -- # : /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:14:02.331     10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@9 -- # : qemu-img
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:14:02.331     10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:14:02.331     10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:14:02.331     10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:14:02.331     10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:14:02.331     10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:14:02.331     10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:14:02.331       10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vfiouser_qemu.sh
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:14:02.331     10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:14:02.331     10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:14:02.331     10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:14:02.331     10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:14:02.331     10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:14:02.331     10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- common/autotest.config@2 -- # vhost_0_main_core=0
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:14:02.331     10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:14:02.331      10:10:57 sma.sma_vfiouser_qemu -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:14:02.331       10:10:57 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:14:02.331        10:10:57 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@244 -- # check_cgroup
00:14:02.331        10:10:57 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:14:02.331        10:10:57 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:14:02.331        10:10:57 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@10 -- # echo 2
00:14:02.331       10:10:57 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:14:02.331    10:10:57 sma.sma_vfiouser_qemu -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:14:02.331    10:10:57 sma.sma_vfiouser_qemu -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:14:02.331    10:10:57 sma.sma_vfiouser_qemu -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@104 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@107 -- # VM_PASSWORD=root
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@108 -- # vm_no=0
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@110 -- # VFO_ROOT_PATH=/tmp/sma/vfio-user/qemu
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@112 -- # '[' -e /tmp/sma/vfio-user/qemu ']'
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@113 -- # mkdir -p /tmp/sma/vfio-user/qemu
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@116 -- # used_vms=0
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@117 -- # vm_kill_all
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@476 -- # local vm
00:14:02.331    10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@477 -- # vm_list_all
00:14:02.331    10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@466 -- # vms=()
00:14:02.331    10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@466 -- # local vms
00:14:02.331    10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:14:02.331    10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:14:02.331    10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@478 -- # vm_kill 1
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@442 -- # vm_num_is_valid 1
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/1
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@446 -- # return 0
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@119 -- # vm_setup --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disk-type=virtio --force=0 '--qemu-args=-qmp tcp:localhost:10005,server,nowait -device pci-bridge,chassis_nr=1,id=pci.spdk.0 -device pci-bridge,chassis_nr=2,id=pci.spdk.1'
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@518 -- # xtrace_disable
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:02.331  INFO: Creating new VM in /root/vhost_test/vms/0
00:14:02.331  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:14:02.331  INFO: TASK MASK: 1-2
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@671 -- # local node_num=0
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@672 -- # local boot_disk_present=false
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:14:02.331  INFO: NUMA NODE: 0
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@677 -- # [[ -n '' ]]
00:14:02.331   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@686 -- # [[ -z '' ]]
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@691 -- # (( 0 == 0 ))
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@691 -- # [[ virtio == virtio* ]]
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@692 -- # disks=("default_virtio.img")
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@701 -- # IFS=,
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@701 -- # read -r disk disk_type _
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@702 -- # [[ -z '' ]]
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@702 -- # disk_type=virtio
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@704 -- # case $disk_type in
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@706 -- # local raw_name=RAWSCSI
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@707 -- # local raw_disk=/root/vhost_test/vms/0/test.img
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@710 -- # [[ -f default_virtio.img ]]
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@714 -- # notice 'Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:14:02.332  INFO: Creating Virtio disc /root/vhost_test/vms/0/test.img
00:14:02.332   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@715 -- # dd if=/dev/zero of=/root/vhost_test/vms/0/test.img bs=1024k count=1024
00:14:02.898  1024+0 records in
00:14:02.898  1024+0 records out
00:14:02.898  1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.477386 s, 2.2 GB/s
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@718 -- # cmd+=(-device "virtio-scsi-pci,num_queues=$queue_number")
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@719 -- # cmd+=(-device "scsi-hd,drive=hd$i,vendor=$raw_name")
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@720 -- # cmd+=(-drive "if=none,id=hd$i,file=$raw_disk,format=raw$raw_cache")
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@780 -- # [[ -n '' ]]
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@785 -- # (( 1 ))
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@785 -- # cmd+=("${qemu_args[@]}")
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/0/run.sh'
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/0/run.sh'
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/0/run.sh'
00:14:02.898  INFO: Saving to /root/vhost_test/vms/0/run.sh
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@787 -- # cat
00:14:02.898    10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 1-2 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :100 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10002,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/0/qemu.pid -serial file:/root/vhost_test/vms/0/serial.log -D /root/vhost_test/vms/0/qemu.log -chardev file,path=/root/vhost_test/vms/0/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10000-:22,hostfwd=tcp::10001-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device virtio-scsi-pci,num_queues=2 -device scsi-hd,drive=hd,vendor=RAWSCSI -drive if=none,id=hd,file=/root/vhost_test/vms/0/test.img,format=raw '-qmp tcp:localhost:10005,server,nowait -device pci-bridge,chassis_nr=1,id=pci.spdk.0 -device pci-bridge,chassis_nr=2,id=pci.spdk.1'
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/0/run.sh
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@827 -- # echo 10000
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@828 -- # echo 10001
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@829 -- # echo 10002
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/0/migration_port
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@832 -- # [[ -z '' ]]
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@834 -- # echo 10004
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@835 -- # echo 100
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@837 -- # [[ -z '' ]]
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@838 -- # [[ -z '' ]]
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@124 -- # vm_run 0
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@843 -- # local run_all=false
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@844 -- # local vms_to_run=
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@846 -- # getopts a-: optchar
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@856 -- # false
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@859 -- # shift 0
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@860 -- # for vm in "$@"
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/0/run.sh ]]
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@866 -- # vms_to_run+=' 0'
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@871 -- # vm_is_running 0
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@373 -- # return 1
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/0/run.sh'
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/0/run.sh'
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/0/run.sh'
00:14:02.898  INFO: running /root/vhost_test/vms/0/run.sh
00:14:02.898   10:10:57 sma.sma_vfiouser_qemu -- vhost/common.sh@877 -- # /root/vhost_test/vms/0/run.sh
00:14:02.898  Running VM in /root/vhost_test/vms/0
00:14:03.465  Waiting for QEMU pid file
00:14:04.399  === qemu.log ===
00:14:04.399  === qemu.log ===
00:14:04.399   10:10:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@125 -- # vm_wait_for_boot 300 0
00:14:04.399   10:10:59 sma.sma_vfiouser_qemu -- vhost/common.sh@913 -- # assert_number 300
00:14:04.399   10:10:59 sma.sma_vfiouser_qemu -- vhost/common.sh@281 -- # [[ 300 =~ [0-9]+ ]]
00:14:04.399   10:10:59 sma.sma_vfiouser_qemu -- vhost/common.sh@281 -- # return 0
00:14:04.399   10:10:59 sma.sma_vfiouser_qemu -- vhost/common.sh@915 -- # xtrace_disable
00:14:04.399   10:10:59 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:04.399  INFO: Waiting for VMs to boot
00:14:04.399  INFO: waiting for VM0 (/root/vhost_test/vms/0)
00:14:26.322  
00:14:26.322  INFO: VM0 ready
00:14:26.322  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:26.322  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:26.322  INFO: all VMs ready
00:14:26.322   10:11:21 sma.sma_vfiouser_qemu -- vhost/common.sh@973 -- # return 0
00:14:26.322   10:11:21 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@129 -- # tgtpid=1807214
00:14:26.322   10:11:21 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@128 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:14:26.322   10:11:21 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@130 -- # waitforlisten 1807214
00:14:26.322   10:11:21 sma.sma_vfiouser_qemu -- common/autotest_common.sh@835 -- # '[' -z 1807214 ']'
00:14:26.322   10:11:21 sma.sma_vfiouser_qemu -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:26.322   10:11:21 sma.sma_vfiouser_qemu -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:26.322   10:11:21 sma.sma_vfiouser_qemu -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:26.322  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:26.322   10:11:21 sma.sma_vfiouser_qemu -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:26.322   10:11:21 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:26.580  [2024-11-20 10:11:21.490153] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:14:26.580  [2024-11-20 10:11:21.490319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1807214 ]
00:14:26.580  EAL: No free 2048 kB hugepages reported on node 1
00:14:26.580  [2024-11-20 10:11:21.621032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:26.839  [2024-11-20 10:11:21.737126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:27.405   10:11:22 sma.sma_vfiouser_qemu -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:27.405   10:11:22 sma.sma_vfiouser_qemu -- common/autotest_common.sh@868 -- # return 0
00:14:27.405   10:11:22 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@133 -- # rpc_cmd dpdk_cryptodev_scan_accel_module
00:14:27.405   10:11:22 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:27.405   10:11:22 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:27.405   10:11:22 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:27.405   10:11:22 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@134 -- # rpc_cmd dpdk_cryptodev_set_driver -d crypto_aesni_mb
00:14:27.405   10:11:22 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:27.405   10:11:22 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:27.405  [2024-11-20 10:11:22.483891] accel_dpdk_cryptodev.c: 224:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:14:27.405   10:11:22 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:27.406   10:11:22 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@135 -- # rpc_cmd accel_assign_opc -o encrypt -m dpdk_cryptodev
00:14:27.406   10:11:22 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:27.406   10:11:22 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:27.406  [2024-11-20 10:11:22.491891] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:14:27.406   10:11:22 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:27.406   10:11:22 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@136 -- # rpc_cmd accel_assign_opc -o decrypt -m dpdk_cryptodev
00:14:27.406   10:11:22 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:27.406   10:11:22 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:27.406  [2024-11-20 10:11:22.499912] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:14:27.406   10:11:22 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:27.406   10:11:22 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@137 -- # rpc_cmd framework_start_init
00:14:27.406   10:11:22 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:27.406   10:11:22 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:27.664  [2024-11-20 10:11:22.748567] accel_dpdk_cryptodev.c:1179:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 1
00:14:28.596   10:11:23 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:28.596   10:11:23 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@140 -- # rpc_cmd bdev_null_create null0 100 4096
00:14:28.596   10:11:23 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:28.596   10:11:23 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:28.596  null0
00:14:28.596   10:11:23 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:28.596   10:11:23 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@141 -- # rpc_cmd bdev_null_create null1 100 4096
00:14:28.596   10:11:23 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:28.596   10:11:23 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:28.596  null1
00:14:28.596   10:11:23 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:28.596   10:11:23 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@160 -- # smapid=1807485
00:14:28.596   10:11:23 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@163 -- # sma_waitforlisten
00:14:28.596   10:11:23 sma.sma_vfiouser_qemu -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:14:28.596   10:11:23 sma.sma_vfiouser_qemu -- sma/common.sh@8 -- # local sma_port=8080
00:14:28.596   10:11:23 sma.sma_vfiouser_qemu -- sma/common.sh@10 -- # (( i = 0 ))
00:14:28.596   10:11:23 sma.sma_vfiouser_qemu -- sma/common.sh@10 -- # (( i < 5 ))
00:14:28.596   10:11:23 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@144 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:14:28.596   10:11:23 sma.sma_vfiouser_qemu -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:28.596    10:11:23 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@144 -- # cat
00:14:28.596   10:11:23 sma.sma_vfiouser_qemu -- sma/common.sh@14 -- # sleep 1s
00:14:28.596  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:28.596  I0000 00:00:1732093883.653145 1807485 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:29.528   10:11:24 sma.sma_vfiouser_qemu -- sma/common.sh@10 -- # (( i++ ))
00:14:29.528   10:11:24 sma.sma_vfiouser_qemu -- sma/common.sh@10 -- # (( i < 5 ))
00:14:29.528   10:11:24 sma.sma_vfiouser_qemu -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:29.528   10:11:24 sma.sma_vfiouser_qemu -- sma/common.sh@12 -- # return 0
00:14:29.528   10:11:24 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@166 -- # rpc_cmd nvmf_get_transports --trtype VFIOUSER
00:14:29.528   10:11:24 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:29.528   10:11:24 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:29.528  [
00:14:29.528  {
00:14:29.528  "trtype": "VFIOUSER",
00:14:29.528  "max_queue_depth": 256,
00:14:29.528  "max_io_qpairs_per_ctrlr": 127,
00:14:29.528  "in_capsule_data_size": 0,
00:14:29.528  "max_io_size": 131072,
00:14:29.528  "io_unit_size": 131072,
00:14:29.528  "max_aq_depth": 32,
00:14:29.528  "num_shared_buffers": 0,
00:14:29.528  "buf_cache_size": 0,
00:14:29.528  "dif_insert_or_strip": false,
00:14:29.528  "zcopy": false,
00:14:29.528  "abort_timeout_sec": 0,
00:14:29.528  "ack_timeout": 0,
00:14:29.528  "data_wr_pool_size": 0
00:14:29.528  }
00:14:29.528  ]
00:14:29.528   10:11:24 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:29.528   10:11:24 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@169 -- # vm_exec 0 '[[ ! -e /sys/class/nvme-subsystem/nvme-subsys0 ]]'
00:14:29.528   10:11:24 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:29.528   10:11:24 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:29.528   10:11:24 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:29.528   10:11:24 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:29.528   10:11:24 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:29.528    10:11:24 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:29.528    10:11:24 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:29.528    10:11:24 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:29.528    10:11:24 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:29.528    10:11:24 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:29.528    10:11:24 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:29.528   10:11:24 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 '[[ ! -e /sys/class/nvme-subsystem/nvme-subsys0 ]]'
00:14:29.528  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:29.528    10:11:24 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@172 -- # create_device 0 0
00:14:29.528    10:11:24 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@172 -- # jq -r .handle
00:14:29.528    10:11:24 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0
00:14:29.528    10:11:24 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:14:29.528    10:11:24 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:29.784  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:29.784  I0000 00:00:1732093884.840771 1807654 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:29.784  I0000 00:00:1732093884.842676 1807654 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:29.784  [2024-11-20 10:11:24.848321] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:14:30.043   10:11:25 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@172 -- # device0=nvme:nqn.2016-06.io.spdk:vfiouser-0
00:14:30.043   10:11:25 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@173 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:14:30.043   10:11:25 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:30.043   10:11:25 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:30.043  [
00:14:30.043  {
00:14:30.043  "nqn": "nqn.2016-06.io.spdk:vfiouser-0",
00:14:30.043  "subtype": "NVMe",
00:14:30.043  "listen_addresses": [
00:14:30.043  {
00:14:30.043  "trtype": "VFIOUSER",
00:14:30.043  "adrfam": "IPv4",
00:14:30.043  "traddr": "/var/tmp/vfiouser-0",
00:14:30.043  "trsvcid": ""
00:14:30.043  }
00:14:30.043  ],
00:14:30.043  "allow_any_host": true,
00:14:30.043  "hosts": [],
00:14:30.043  "serial_number": "00000000000000000000",
00:14:30.043  "model_number": "SPDK bdev Controller",
00:14:30.043  "max_namespaces": 32,
00:14:30.043  "min_cntlid": 1,
00:14:30.043  "max_cntlid": 65519,
00:14:30.043  "namespaces": []
00:14:30.043  }
00:14:30.043  ]
00:14:30.043   10:11:25 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:30.043   10:11:25 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@174 -- # vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-0
00:14:30.043   10:11:25 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@89 -- # sleep 1
00:14:30.043  [2024-11-20 10:11:25.156649] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-0: enabling controller
00:14:30.975    10:11:26 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:14:30.975    10:11:26 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:30.975    10:11:26 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:30.975    10:11:26 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:30.975    10:11:26 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:30.975    10:11:26 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:30.975     10:11:26 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:30.975     10:11:26 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:30.975     10:11:26 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:30.975     10:11:26 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:30.975     10:11:26 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:30.975     10:11:26 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:30.975    10:11:26 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:14:30.975  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:31.233   10:11:26 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # nqn=/sys/class/nvme/nvme0/subsysnqn
00:14:31.233   10:11:26 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@91 -- # [[ -z /sys/class/nvme/nvme0/subsysnqn ]]
00:14:31.233    10:11:26 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@177 -- # rpc_cmd nvmf_get_subsystems
00:14:31.233    10:11:26 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@177 -- # jq -r '. | length'
00:14:31.233    10:11:26 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:31.233    10:11:26 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:31.233    10:11:26 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:31.233   10:11:26 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@177 -- # [[ 2 -eq 2 ]]
00:14:31.233    10:11:26 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@179 -- # create_device 1 0
00:14:31.233    10:11:26 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@179 -- # jq -r .handle
00:14:31.233    10:11:26 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=1
00:14:31.233    10:11:26 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:14:31.233    10:11:26 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:31.491  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:31.491  I0000 00:00:1732093886.450732 1807832 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:31.491  I0000 00:00:1732093886.452582 1807832 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:31.491  [2024-11-20 10:11:26.457318] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-1' does not exist
00:14:31.749   10:11:26 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@179 -- # device1=nvme:nqn.2016-06.io.spdk:vfiouser-1
00:14:31.749   10:11:26 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@180 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:14:31.749   10:11:26 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:31.749   10:11:26 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:31.749  [
00:14:31.749  {
00:14:31.749  "nqn": "nqn.2016-06.io.spdk:vfiouser-0",
00:14:31.749  "subtype": "NVMe",
00:14:31.749  "listen_addresses": [
00:14:31.749  {
00:14:31.749  "trtype": "VFIOUSER",
00:14:31.749  "adrfam": "IPv4",
00:14:31.749  "traddr": "/var/tmp/vfiouser-0",
00:14:31.749  "trsvcid": ""
00:14:31.749  }
00:14:31.749  ],
00:14:31.749  "allow_any_host": true,
00:14:31.749  "hosts": [],
00:14:31.749  "serial_number": "00000000000000000000",
00:14:31.749  "model_number": "SPDK bdev Controller",
00:14:31.749  "max_namespaces": 32,
00:14:31.749  "min_cntlid": 1,
00:14:31.749  "max_cntlid": 65519,
00:14:31.749  "namespaces": []
00:14:31.749  }
00:14:31.749  ]
00:14:31.749   10:11:26 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:31.749   10:11:26 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@181 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:14:31.749   10:11:26 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:31.749   10:11:26 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:31.749  [
00:14:31.749  {
00:14:31.749  "nqn": "nqn.2016-06.io.spdk:vfiouser-1",
00:14:31.749  "subtype": "NVMe",
00:14:31.749  "listen_addresses": [
00:14:31.749  {
00:14:31.749  "trtype": "VFIOUSER",
00:14:31.749  "adrfam": "IPv4",
00:14:31.749  "traddr": "/var/tmp/vfiouser-1",
00:14:31.749  "trsvcid": ""
00:14:31.749  }
00:14:31.749  ],
00:14:31.749  "allow_any_host": true,
00:14:31.749  "hosts": [],
00:14:31.749  "serial_number": "00000000000000000000",
00:14:31.749  "model_number": "SPDK bdev Controller",
00:14:31.749  "max_namespaces": 32,
00:14:31.749  "min_cntlid": 1,
00:14:31.749  "max_cntlid": 65519,
00:14:31.749  "namespaces": []
00:14:31.749  }
00:14:31.749  ]
00:14:31.749   10:11:26 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:31.749   10:11:26 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@182 -- # [[ nvme:nqn.2016-06.io.spdk:vfiouser-0 != \n\v\m\e\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\v\f\i\o\u\s\e\r\-\1 ]]
00:14:31.749   10:11:26 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@183 -- # vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-1
00:14:31.749   10:11:26 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@89 -- # sleep 1
00:14:31.749  [2024-11-20 10:11:26.711481] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-1: enabling controller
00:14:32.682    10:11:27 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:14:32.682    10:11:27 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:32.682    10:11:27 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:32.682    10:11:27 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:32.682    10:11:27 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:32.682    10:11:27 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:32.683     10:11:27 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:32.683     10:11:27 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:32.683     10:11:27 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:32.683     10:11:27 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:32.683     10:11:27 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:32.683     10:11:27 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:32.683    10:11:27 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:14:32.683  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:32.683   10:11:27 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # nqn=/sys/class/nvme/nvme1/subsysnqn
00:14:32.683   10:11:27 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@91 -- # [[ -z /sys/class/nvme/nvme1/subsysnqn ]]
00:14:32.683    10:11:27 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@186 -- # rpc_cmd nvmf_get_subsystems
00:14:32.683    10:11:27 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@186 -- # jq -r '. | length'
00:14:32.683    10:11:27 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:32.939    10:11:27 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:32.939    10:11:27 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:32.939   10:11:27 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@186 -- # [[ 3 -eq 3 ]]
00:14:32.939    10:11:27 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@190 -- # create_device 0 0
00:14:32.939    10:11:27 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@190 -- # jq -r .handle
00:14:32.939    10:11:27 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0
00:14:32.939    10:11:27 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:14:32.939    10:11:27 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:33.197  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:33.197  I0000 00:00:1732093888.083085 1808126 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:33.197  I0000 00:00:1732093888.084829 1808126 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:33.197   10:11:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@190 -- # tmp0=nvme:nqn.2016-06.io.spdk:vfiouser-0
00:14:33.197    10:11:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@191 -- # create_device 1 0
00:14:33.197    10:11:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=1
00:14:33.197    10:11:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@191 -- # jq -r .handle
00:14:33.197    10:11:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:14:33.197    10:11:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:33.455  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:33.455  I0000 00:00:1732093888.367249 1808158 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:33.455  I0000 00:00:1732093888.369242 1808158 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:33.455   10:11:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@191 -- # tmp1=nvme:nqn.2016-06.io.spdk:vfiouser-1
00:14:33.455    10:11:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@193 -- # vm_count_nvme 0
00:14:33.455    10:11:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # vm_exec 0 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:14:33.455    10:11:28 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:33.455    10:11:28 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:33.455    10:11:28 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:33.455    10:11:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # wc -l
00:14:33.455    10:11:28 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:33.455    10:11:28 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:33.455     10:11:28 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:33.455     10:11:28 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:33.455     10:11:28 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:33.455     10:11:28 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:33.455     10:11:28 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:33.455     10:11:28 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:33.455    10:11:28 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:14:33.455  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:33.455   10:11:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@193 -- # [[ 2 -eq 2 ]]
00:14:33.455    10:11:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@195 -- # rpc_cmd nvmf_get_subsystems
00:14:33.455    10:11:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@195 -- # jq -r '. | length'
00:14:33.455    10:11:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:33.455    10:11:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:33.712    10:11:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:33.712   10:11:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@195 -- # [[ 3 -eq 3 ]]
00:14:33.712   10:11:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@196 -- # [[ nvme:nqn.2016-06.io.spdk:vfiouser-0 == \n\v\m\e\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\v\f\i\o\u\s\e\r\-\0 ]]
00:14:33.712   10:11:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@197 -- # [[ nvme:nqn.2016-06.io.spdk:vfiouser-1 == \n\v\m\e\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\v\f\i\o\u\s\e\r\-\1 ]]
00:14:33.712   10:11:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@200 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0
00:14:33.712   10:11:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:33.970  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:33.970  I0000 00:00:1732093888.870575 1808190 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:33.970  I0000 00:00:1732093888.872494 1808190 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:33.970  {}
00:14:33.970   10:11:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@201 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:14:33.970   10:11:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:14:33.970   10:11:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:14:33.970   10:11:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:14:33.970   10:11:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:33.970    10:11:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:14:33.970   10:11:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:33.970   10:11:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:14:33.970   10:11:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:33.970   10:11:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:33.970  [2024-11-20 10:11:28.926045] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:14:33.970  request:
00:14:33.970  {
00:14:33.970    "nqn": "nqn.2016-06.io.spdk:vfiouser-0",
00:14:33.970    "method": "nvmf_get_subsystems",
00:14:33.970    "req_id": 1
00:14:33.970  }
00:14:33.970  Got JSON-RPC error response
00:14:33.970  response:
00:14:33.970  {
00:14:33.970    "code": -19,
00:14:33.970    "message": "No such device"
00:14:33.970  }
00:14:33.970   10:11:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:14:33.970   10:11:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:14:33.970   10:11:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:33.970   10:11:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:33.970   10:11:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:33.970   10:11:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@202 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:14:33.970   10:11:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:33.970   10:11:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:33.970  [
00:14:33.970  {
00:14:33.970  "nqn": "nqn.2016-06.io.spdk:vfiouser-1",
00:14:33.970  "subtype": "NVMe",
00:14:33.970  "listen_addresses": [
00:14:33.970  {
00:14:33.970  "trtype": "VFIOUSER",
00:14:33.970  "adrfam": "IPv4",
00:14:33.970  "traddr": "/var/tmp/vfiouser-1",
00:14:33.970  "trsvcid": ""
00:14:33.970  }
00:14:33.970  ],
00:14:33.970  "allow_any_host": true,
00:14:33.970  "hosts": [],
00:14:33.970  "serial_number": "00000000000000000000",
00:14:33.970  "model_number": "SPDK bdev Controller",
00:14:33.970  "max_namespaces": 32,
00:14:33.970  "min_cntlid": 1,
00:14:33.970  "max_cntlid": 65519,
00:14:33.970  "namespaces": []
00:14:33.970  }
00:14:33.970  ]
00:14:33.970   10:11:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:33.970    10:11:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@203 -- # rpc_cmd nvmf_get_subsystems
00:14:33.970    10:11:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:33.970    10:11:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@203 -- # jq -r '. | length'
00:14:33.970    10:11:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:33.970    10:11:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:33.970   10:11:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@203 -- # [[ 2 -eq 2 ]]
00:14:33.970    10:11:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@204 -- # vm_count_nvme 0
00:14:33.970    10:11:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # vm_exec 0 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:14:33.970    10:11:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # wc -l
00:14:33.970    10:11:28 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:33.970    10:11:28 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:33.970    10:11:28 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:33.970    10:11:28 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:33.970    10:11:28 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:33.970     10:11:28 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:33.970     10:11:28 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:33.970     10:11:28 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:33.970     10:11:28 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:33.970     10:11:28 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:33.970     10:11:28 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:33.971    10:11:28 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:14:33.971  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:34.228   10:11:29 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@204 -- # [[ 1 -eq 1 ]]
00:14:34.228   10:11:29 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@206 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-1
00:14:34.228   10:11:29 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:34.487  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:34.487  I0000 00:00:1732093889.390176 1808351 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:34.487  I0000 00:00:1732093889.391965 1808351 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:34.487  {}
00:14:34.487   10:11:29 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@207 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:14:34.487   10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:14:34.487   10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:14:34.487   10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:14:34.487   10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:34.487    10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:14:34.487   10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:34.487   10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:14:34.487   10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:34.487   10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:34.487  [2024-11-20 10:11:29.439675] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:14:34.487  request:
00:14:34.487  {
00:14:34.487    "nqn": "nqn.2016-06.io.spdk:vfiouser-0",
00:14:34.487    "method": "nvmf_get_subsystems",
00:14:34.487    "req_id": 1
00:14:34.487  }
00:14:34.487  Got JSON-RPC error response
00:14:34.487  response:
00:14:34.487  {
00:14:34.487    "code": -19,
00:14:34.487    "message": "No such device"
00:14:34.487  }
00:14:34.487   10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:14:34.487   10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:14:34.487   10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:34.487   10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:34.487   10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:34.487   10:11:29 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@208 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:14:34.487   10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:14:34.487   10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:14:34.487   10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:14:34.487   10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:34.487    10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:14:34.487   10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:34.487   10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:14:34.487   10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:34.487   10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:34.487  [2024-11-20 10:11:29.451717] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-1' does not exist
00:14:34.487  request:
00:14:34.487  {
00:14:34.487    "nqn": "nqn.2016-06.io.spdk:vfiouser-1",
00:14:34.488    "method": "nvmf_get_subsystems",
00:14:34.488    "req_id": 1
00:14:34.488  }
00:14:34.488  Got JSON-RPC error response
00:14:34.488  response:
00:14:34.488  {
00:14:34.488    "code": -19,
00:14:34.488    "message": "No such device"
00:14:34.488  }
00:14:34.488   10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:14:34.488   10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:14:34.488   10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:34.488   10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:34.488   10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:34.488    10:11:29 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@209 -- # rpc_cmd nvmf_get_subsystems
00:14:34.488    10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:34.488    10:11:29 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@209 -- # jq -r '. | length'
00:14:34.488    10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:34.488    10:11:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:34.488   10:11:29 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@209 -- # [[ 1 -eq 1 ]]
00:14:34.488    10:11:29 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@210 -- # vm_count_nvme 0
00:14:34.488    10:11:29 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # vm_exec 0 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:14:34.488    10:11:29 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:34.488    10:11:29 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # wc -l
00:14:34.488    10:11:29 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:34.488    10:11:29 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:34.488    10:11:29 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:34.488    10:11:29 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:34.488     10:11:29 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:34.488     10:11:29 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:34.488     10:11:29 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:34.488     10:11:29 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:34.488     10:11:29 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:34.488     10:11:29 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:34.488    10:11:29 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:14:34.488  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:34.746   10:11:29 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@210 -- # [[ 0 -eq 0 ]]
00:14:34.746   10:11:29 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@213 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0
00:14:34.746   10:11:29 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:35.004  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:35.004  I0000 00:00:1732093889.937069 1808388 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:35.004  I0000 00:00:1732093889.938901 1808388 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:35.004  [2024-11-20 10:11:29.945258] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:14:35.004  {}
00:14:35.004   10:11:29 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@214 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-1
00:14:35.004   10:11:29 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:35.262  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:35.262  I0000 00:00:1732093890.208573 1808532 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:35.262  I0000 00:00:1732093890.210316 1808532 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:35.262  [2024-11-20 10:11:30.214086] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-1' does not exist
00:14:35.262  {}
00:14:35.262    10:11:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@217 -- # create_device 0 0
00:14:35.262    10:11:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@217 -- # jq -r .handle
00:14:35.262    10:11:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0
00:14:35.262    10:11:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:14:35.262    10:11:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:35.520  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:35.520  I0000 00:00:1732093890.464856 1808563 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:35.520  I0000 00:00:1732093890.466763 1808563 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:35.520  [2024-11-20 10:11:30.470747] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:14:35.520   10:11:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@217 -- # device0=nvme:nqn.2016-06.io.spdk:vfiouser-0
00:14:35.778    10:11:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@218 -- # create_device 1 0
00:14:35.778    10:11:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@218 -- # jq -r .handle
00:14:35.778    10:11:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=1
00:14:35.778    10:11:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:14:35.778    10:11:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:35.778  [2024-11-20 10:11:30.735693] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-0: enabling controller
00:14:35.778  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:35.778  I0000 00:00:1732093890.869750 1808587 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:35.778  I0000 00:00:1732093890.871766 1808587 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:35.778  [2024-11-20 10:11:30.876411] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-1' does not exist
00:14:36.036   10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@218 -- # device1=nvme:nqn.2016-06.io.spdk:vfiouser-1
00:14:36.036    10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@219 -- # rpc_cmd bdev_get_bdevs -b null0
00:14:36.036    10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@219 -- # jq -r '.[].uuid'
00:14:36.036    10:11:31 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:36.036    10:11:31 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:36.036    10:11:31 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:36.036   10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@219 -- # uuid0=23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:36.036    10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@220 -- # rpc_cmd bdev_get_bdevs -b null1
00:14:36.036    10:11:31 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:36.036    10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@220 -- # jq -r '.[].uuid'
00:14:36.036    10:11:31 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:36.036    10:11:31 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:36.036   10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@220 -- # uuid1=ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:36.036   10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@223 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:36.036   10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:36.036    10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:36.036    10:11:31 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:14:36.294  [2024-11-20 10:11:31.160645] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-1: enabling controller
00:14:36.294  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:36.294  I0000 00:00:1732093891.386067 1808742 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:36.294  I0000 00:00:1732093891.388149 1808742 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:36.552  {}
00:14:36.552    10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@224 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:14:36.552    10:11:31 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:36.552    10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@224 -- # jq -r '.[0].namespaces | length'
00:14:36.552    10:11:31 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:36.552    10:11:31 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:36.552   10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@224 -- # [[ 1 -eq 1 ]]
00:14:36.552    10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@225 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:14:36.552    10:11:31 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:36.552    10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@225 -- # jq -r '.[0].namespaces | length'
00:14:36.552    10:11:31 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:36.552    10:11:31 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:36.552   10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@225 -- # [[ 0 -eq 0 ]]
00:14:36.552    10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@226 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:14:36.552    10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@226 -- # jq -r '.[0].namespaces[0].uuid'
00:14:36.552    10:11:31 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:36.552    10:11:31 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:36.552    10:11:31 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:36.552   10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@226 -- # [[ 23ff93a2-6fde-435f-94f1-e70641ce1d2f == \2\3\f\f\9\3\a\2\-\6\f\d\e\-\4\3\5\f\-\9\4\f\1\-\e\7\0\6\4\1\c\e\1\d\2\f ]]
00:14:36.552   10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@227 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:36.552   10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:14:36.552   10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:14:36.553   10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:36.553    10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:14:36.553    10:11:31 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:36.553    10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:14:36.553    10:11:31 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:36.553    10:11:31 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:36.553    10:11:31 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:36.553    10:11:31 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:36.553     10:11:31 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:36.553     10:11:31 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:36.553     10:11:31 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:36.553     10:11:31 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:36.553     10:11:31 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:36.553     10:11:31 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:36.553    10:11:31 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:14:36.553  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:36.811   10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:14:36.812   10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:14:36.812    10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 23ff93a2-6fde-435f-94f1-e70641ce1d2f /sys/class/nvme/nvme0/nvme*/uuid'
00:14:36.812    10:11:31 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:36.812    10:11:31 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:36.812    10:11:31 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:36.812    10:11:31 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:36.812    10:11:31 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:36.812     10:11:31 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:36.812     10:11:31 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:36.812     10:11:31 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:36.812     10:11:31 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:36.812     10:11:31 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:36.812     10:11:31 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:36.812    10:11:31 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 23ff93a2-6fde-435f-94f1-e70641ce1d2f /sys/class/nvme/nvme0/nvme*/uuid'
00:14:36.812  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:36.812   10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme0/nvme0c0n1/uuid
00:14:36.812   10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme0/nvme0c0n1/uuid ]]
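The `vm_check_subsys_volume` sequence traced above (vfiouser_qemu.sh@72-84) greps the guest's sysfs twice: once to find which `nvme*` controller exposes the subsystem NQN, then once to confirm a namespace under that controller carries the expected volume UUID. A self-contained sketch of the same check, parameterized on a sysfs-like root so it can run outside the guest (the function name and `sysfs` parameter are hypothetical; the real helper runs the greps over SSH with a fixed `/sys/class/nvme` root):

```shell
# Sketch of the in-guest volume check: locate the controller by subsystem
# NQN, then verify a namespace under it reports the expected UUID.
check_subsys_volume() {
    local sysfs=$1 nqn=$2 uuid=$3 nvme tmpuuid
    # grep -l prints matching file paths; the controller name is the
    # path component just before /subsysnqn.
    nvme=$(grep -l "$nqn" "$sysfs"/nvme*/subsysnqn | awk -F/ '{print $(NF-1)}')
    [[ -z $nvme ]] && return 1
    # Any namespace (e.g. nvme0c0n1) under the controller with a matching
    # uuid attribute proves the volume is attached and visible.
    tmpuuid=$(grep -l "$uuid" "$sysfs/$nvme"/nvme*/uuid)
    [[ -n $tmpuuid ]]
}
```

The trace's `awk -F/ '{print $5}'` extracts the same component, relying on the fixed depth of `/sys/class/nvme/nvmeN/subsysnqn`.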
00:14:36.812   10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@229 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:36.812   10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:36.812    10:11:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:36.812    10:11:31 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:14:37.378  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:37.378  I0000 00:00:1732093892.204319 1808794 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:37.378  I0000 00:00:1732093892.206205 1808794 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:37.378  {}
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@230 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@230 -- # jq -r '.[0].namespaces | length'
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:37.378   10:11:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@230 -- # [[ 1 -eq 1 ]]
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@231 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@231 -- # jq -r '.[0].namespaces | length'
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:37.378   10:11:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@231 -- # [[ 1 -eq 1 ]]
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@232 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@232 -- # jq -r '.[0].namespaces[0].uuid'
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:37.378   10:11:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@232 -- # [[ 23ff93a2-6fde-435f-94f1-e70641ce1d2f == \2\3\f\f\9\3\a\2\-\6\f\d\e\-\4\3\5\f\-\9\4\f\1\-\e\7\0\6\4\1\c\e\1\d\2\f ]]
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@233 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@233 -- # jq -r '.[0].namespaces[0].uuid'
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:37.378   10:11:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@233 -- # [[ ec6c2f06-4131-4996-abbd-9a4dc16b99f9 == \e\c\6\c\2\f\0\6\-\4\1\3\1\-\4\9\9\6\-\a\b\b\d\-\9\a\4\d\c\1\6\b\9\9\f\9 ]]
00:14:37.378   10:11:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@234 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:37.378   10:11:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:14:37.378   10:11:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:14:37.378   10:11:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:37.378     10:11:32 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:37.378     10:11:32 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:37.378     10:11:32 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:37.378     10:11:32 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:37.378     10:11:32 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:37.378     10:11:32 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:37.378    10:11:32 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:14:37.378  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:37.637   10:11:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:14:37.637   10:11:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:14:37.637    10:11:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l ec6c2f06-4131-4996-abbd-9a4dc16b99f9 /sys/class/nvme/nvme1/nvme*/uuid'
00:14:37.637    10:11:32 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:37.637    10:11:32 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:37.637    10:11:32 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:37.637    10:11:32 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:37.637    10:11:32 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:37.637     10:11:32 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:37.637     10:11:32 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:37.637     10:11:32 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:37.637     10:11:32 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:37.637     10:11:32 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:37.637     10:11:32 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:37.637    10:11:32 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l ec6c2f06-4131-4996-abbd-9a4dc16b99f9 /sys/class/nvme/nvme1/nvme*/uuid'
00:14:37.637  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:37.637   10:11:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme1/nvme1c1n1/uuid
00:14:37.637   10:11:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme1/nvme1c1n1/uuid ]]
00:14:37.637   10:11:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@237 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:37.637   10:11:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:37.637    10:11:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:37.637    10:11:32 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:14:38.203  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:38.203  I0000 00:00:1732093893.083752 1808975 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:38.203  I0000 00:00:1732093893.085836 1808975 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:38.203  {}
00:14:38.203   10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@238 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:38.203   10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:38.203    10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:38.203    10:11:33 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:14:38.461  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:38.461  I0000 00:00:1732093893.431477 1809008 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:38.461  I0000 00:00:1732093893.433484 1809008 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:38.461  {}
00:14:38.461    10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@239 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:14:38.461    10:11:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:38.461    10:11:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:38.461    10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@239 -- # jq -r '.[0].namespaces | length'
00:14:38.461    10:11:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:38.461   10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@239 -- # [[ 1 -eq 1 ]]
00:14:38.461    10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@240 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:14:38.461    10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@240 -- # jq -r '.[0].namespaces | length'
00:14:38.461    10:11:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:38.461    10:11:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:38.461    10:11:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:38.461   10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@240 -- # [[ 1 -eq 1 ]]
00:14:38.461    10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@241 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:14:38.461    10:11:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:38.461    10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@241 -- # jq -r '.[0].namespaces[0].uuid'
00:14:38.461    10:11:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:38.721    10:11:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:38.721   10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@241 -- # [[ 23ff93a2-6fde-435f-94f1-e70641ce1d2f == \2\3\f\f\9\3\a\2\-\6\f\d\e\-\4\3\5\f\-\9\4\f\1\-\e\7\0\6\4\1\c\e\1\d\2\f ]]
00:14:38.721    10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@242 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:14:38.721    10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@242 -- # jq -r '.[0].namespaces[0].uuid'
00:14:38.721    10:11:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:38.721    10:11:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:38.721    10:11:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:38.721   10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@242 -- # [[ ec6c2f06-4131-4996-abbd-9a4dc16b99f9 == \e\c\6\c\2\f\0\6\-\4\1\3\1\-\4\9\9\6\-\a\b\b\d\-\9\a\4\d\c\1\6\b\9\9\f\9 ]]
00:14:38.721   10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@243 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:38.721   10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:14:38.721   10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:14:38.721   10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:38.721    10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:14:38.721    10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:14:38.721    10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:38.721    10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:38.721    10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:38.721    10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:38.721    10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:38.721     10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:38.721     10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:38.721     10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:38.721     10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:38.721     10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:38.721     10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:38.721    10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:14:38.721  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:38.721   10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:14:38.721   10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:14:38.721    10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 23ff93a2-6fde-435f-94f1-e70641ce1d2f /sys/class/nvme/nvme0/nvme*/uuid'
00:14:38.721    10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:38.721    10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:38.721    10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:38.721    10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:38.721    10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:38.721     10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:38.721     10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:38.721     10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:38.721     10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:38.721     10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:38.721     10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:38.721    10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 23ff93a2-6fde-435f-94f1-e70641ce1d2f /sys/class/nvme/nvme0/nvme*/uuid'
00:14:38.980  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:38.980   10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme0/nvme0c0n1/uuid
00:14:38.980   10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme0/nvme0c0n1/uuid ]]
00:14:38.980   10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@244 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:38.980   10:11:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:14:38.980   10:11:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:38.980   10:11:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:14:38.980   10:11:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:38.980    10:11:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:14:38.980   10:11:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:38.980   10:11:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:38.980   10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:14:38.980   10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:14:38.980   10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:38.980    10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:14:38.980    10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:38.980    10:11:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:14:38.980    10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:38.980    10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:38.980    10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:38.980    10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:38.980     10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:38.980     10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:38.980     10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:38.980     10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:38.980     10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:38.980     10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:38.980    10:11:33 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:14:38.980  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:39.238   10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:14:39.238   10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:14:39.238    10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l ec6c2f06-4131-4996-abbd-9a4dc16b99f9 /sys/class/nvme/nvme0/nvme*/uuid'
00:14:39.238    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:39.238    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:39.238    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:39.238    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:39.238    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:39.238     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:39.238     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:39.238     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:39.238     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:39.238     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:39.238     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:39.238    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l ec6c2f06-4131-4996-abbd-9a4dc16b99f9 /sys/class/nvme/nvme0/nvme*/uuid'
00:14:39.238  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:39.238   10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:14:39.238   10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:14:39.238   10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:14:39.238   10:11:34 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:14:39.238   10:11:34 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:39.238   10:11:34 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:39.238   10:11:34 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
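The `NOT` wrapper traced above (autotest_common.sh@652-679) asserts that the wrapped check fails: it validates the argument with `valid_exec_arg`, runs it, and inverts the exit status. Stripped of that bookkeeping, the core inversion can be sketched as follows (a simplified stand-in, not the actual autotest_common.sh implementation):

```shell
# Minimal sketch of a NOT-style assertion helper: succeed only when the
# wrapped command fails, so negative test cases read naturally.
NOT() {
    if "$@"; then
        return 1  # wrapped command unexpectedly succeeded
    fi
    return 0      # wrapped command failed, as the test expects
}
```

This is why the trace ends with `es=1` and `(( !es == 0 ))`: the inner `vm_check_subsys_volume` returned 1, which `NOT` treats as success.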
00:14:39.238   10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@245 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:39.238   10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:14:39.238   10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:14:39.238   10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:39.238    10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:14:39.238    10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:14:39.238    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:39.238    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:39.238    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:39.239    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:39.239    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:39.239     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:39.239     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:39.239     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:39.239     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:39.239     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:39.239     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:39.239    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:14:39.239  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:39.497   10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:14:39.497   10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:14:39.497    10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l ec6c2f06-4131-4996-abbd-9a4dc16b99f9 /sys/class/nvme/nvme1/nvme*/uuid'
00:14:39.497    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:39.497    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:39.497    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:39.497    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:39.497    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:39.497     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:39.497     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:39.497     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:39.497     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:39.497     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:39.497     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:39.497    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l ec6c2f06-4131-4996-abbd-9a4dc16b99f9 /sys/class/nvme/nvme1/nvme*/uuid'
00:14:39.497  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:39.497   10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme1/nvme1c1n1/uuid
00:14:39.497   10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme1/nvme1c1n1/uuid ]]
00:14:39.497   10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@246 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:39.497   10:11:34 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:14:39.497   10:11:34 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:39.497   10:11:34 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:14:39.497   10:11:34 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:39.497    10:11:34 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:14:39.497   10:11:34 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:39.498   10:11:34 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:39.498   10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:14:39.498   10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:14:39.498   10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:39.498    10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:14:39.498    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:39.498    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:39.498    10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:14:39.498    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:39.498    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:39.498    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:39.498     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:39.498     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:39.498     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:39.498     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:39.498     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:39.498     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:39.498    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:14:39.756  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:39.756   10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:14:39.756   10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:14:39.756    10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 23ff93a2-6fde-435f-94f1-e70641ce1d2f /sys/class/nvme/nvme1/nvme*/uuid'
00:14:39.756    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:39.756    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:39.756    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:39.756    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:39.756    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:39.756     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:39.756     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:39.756     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:39.756     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:39.756     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:39.756     10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:39.756    10:11:34 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 23ff93a2-6fde-435f-94f1-e70641ce1d2f /sys/class/nvme/nvme1/nvme*/uuid'
00:14:39.756  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:40.014   10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:14:40.014   10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:14:40.014   10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:14:40.014   10:11:34 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:14:40.014   10:11:34 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:40.014   10:11:34 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:40.014   10:11:34 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
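The trace above exercises `vm_check_subsys_volume` under the `NOT` wrapper: the helper first greps the guest's `/sys/class/nvme/*/subsysnqn` files to find which controller carries the target NQN, then greps that controller's per-namespace `uuid` files for the volume UUID; an empty `tmpuuid` makes it return 1, which `NOT` turns into a pass. A minimal sketch of that logic, reconstructed from the xtrace (the `vm_exec` wrapper is assumed to run the quoted command in the guest over SSH; this is not the authoritative `vfiouser_qemu.sh` source):

```shell
# Sketch reconstructed from the trace (vfiouser_qemu.sh@72-84):
# returns 0 if the guest sees the volume UUID under the controller
# bound to the given subsystem NQN, 1 otherwise.
vm_check_subsys_volume() {
    local vm_id=$1 nqn=$2 uuid=$3
    local nvme tmpuuid
    # Which nvmeN controller in the guest carries this subsystem NQN?
    nvme=$(vm_exec "$vm_id" "grep -l $nqn /sys/class/nvme/*/subsysnqn" \
        | awk -F/ '{print $5}')
    [[ -z $nvme ]] && return 1
    # Does any namespace under that controller expose the volume UUID?
    tmpuuid=$(vm_exec "$vm_id" \
        "grep -l $uuid /sys/class/nvme/$nvme/nvme*/uuid")
    [[ -z $tmpuuid ]] && return 1
    return 0
}
```

The `NOT` wrapper in `autotest_common.sh` inverts this: the test asserts that a detached volume is no longer visible, so the helper returning 1 is the expected outcome here.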
00:14:40.014   10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@249 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:40.014   10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:40.014    10:11:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:40.014    10:11:34 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:14:40.272  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:40.272  I0000 00:00:1732093895.198902 1809348 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:40.272  I0000 00:00:1732093895.200651 1809348 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:40.272  {}
00:14:40.272   10:11:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@250 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:40.272   10:11:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:40.272    10:11:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:40.272    10:11:35 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:14:40.530  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:40.530  I0000 00:00:1732093895.594829 1809381 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:40.530  I0000 00:00:1732093895.596649 1809381 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:40.530  {}
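Each `detach_volume` call above shells out to `sma-client.py` with the volume UUID converted by `uuid2base64` (`sma/common.sh@20` drops into `python`), and the bare `{}` is the empty gRPC response on success. A plausible sketch of that conversion, assuming it base64-encodes the UUID's raw 16 bytes for the protobuf payload (reconstructed, not the actual `sma/common.sh` source):

```shell
# Hypothetical reconstruction of uuid2base64: emit the base64 encoding
# of a UUID's 16 raw bytes, matching the "uuid2base64 <uuid> | python"
# shape seen in the trace.
uuid2base64() {
    python3 -c '
import base64, sys, uuid
print(base64.b64encode(uuid.UUID(sys.argv[1]).bytes).decode())
' "$1"
}
```

Base64 of the raw bytes (24 characters for 16 bytes) is the usual wire form for `bytes` fields in JSON-encoded protobuf requests, which is consistent with the SMA client taking its request body on stdin.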
00:14:40.788    10:11:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@251 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:14:40.788    10:11:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@251 -- # jq -r '.[0].namespaces | length'
00:14:40.788    10:11:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:40.788    10:11:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:40.788    10:11:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:40.788   10:11:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@251 -- # [[ 1 -eq 1 ]]
00:14:40.788    10:11:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@252 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:14:40.788    10:11:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:40.788    10:11:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@252 -- # jq -r '.[0].namespaces | length'
00:14:40.788    10:11:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:40.788    10:11:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:40.788   10:11:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@252 -- # [[ 1 -eq 1 ]]
00:14:40.788    10:11:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@253 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:14:40.788    10:11:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:40.788    10:11:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:40.788    10:11:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@253 -- # jq -r '.[0].namespaces[0].uuid'
00:14:40.788    10:11:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:40.788   10:11:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@253 -- # [[ 23ff93a2-6fde-435f-94f1-e70641ce1d2f == \2\3\f\f\9\3\a\2\-\6\f\d\e\-\4\3\5\f\-\9\4\f\1\-\e\7\0\6\4\1\c\e\1\d\2\f ]]
00:14:40.788    10:11:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@254 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:14:40.788    10:11:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:40.788    10:11:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@254 -- # jq -r '.[0].namespaces[0].uuid'
00:14:40.788    10:11:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:40.789    10:11:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:40.789   10:11:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@254 -- # [[ ec6c2f06-4131-4996-abbd-9a4dc16b99f9 == \e\c\6\c\2\f\0\6\-\4\1\3\1\-\4\9\9\6\-\a\b\b\d\-\9\a\4\d\c\1\6\b\9\9\f\9 ]]
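After the cross-detach, the test re-queries each subsystem over RPC and checks the namespace count and UUID with `jq`, as seen at `vfiouser_qemu.sh@251-254`. A minimal sketch of that verification step (the `rpc_cmd` name and JSON shape follow the trace; assume `nvmf_get_subsystems <nqn>` returns a one-element array whose object carries a `namespaces` list):

```shell
# Verify a subsystem holds exactly one namespace with the expected UUID,
# mirroring the jq filters in the trace above.
check_subsys_ns() {
    local nqn=$1 want_uuid=$2 json count uuid
    json=$(rpc_cmd nvmf_get_subsystems "$nqn")
    count=$(jq -r '.[0].namespaces | length' <<< "$json")
    [[ $count -eq 1 ]] || return 1
    uuid=$(jq -r '.[0].namespaces[0].uuid' <<< "$json")
    [[ $uuid == "$want_uuid" ]]
}
```

The backslash-escaped comparison in the trace (`\2\3\f\f...`) is just bash xtrace quoting of the right-hand side of `[[ ... == ... ]]` to suppress glob interpretation; the values compared are the plain UUIDs.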
00:14:40.789   10:11:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@255 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:40.789   10:11:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:14:40.789   10:11:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:14:40.789   10:11:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:40.789    10:11:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:14:40.789    10:11:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:14:40.789    10:11:35 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:40.789    10:11:35 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:40.789    10:11:35 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:40.789    10:11:35 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:40.789    10:11:35 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:40.789     10:11:35 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:40.789     10:11:35 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:40.789     10:11:35 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:40.789     10:11:35 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:40.789     10:11:35 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:40.789     10:11:35 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:40.789    10:11:35 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:14:40.789  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:41.047   10:11:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:14:41.047   10:11:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:14:41.047    10:11:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 23ff93a2-6fde-435f-94f1-e70641ce1d2f /sys/class/nvme/nvme0/nvme*/uuid'
00:14:41.047    10:11:35 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:41.047    10:11:35 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:41.047    10:11:35 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:41.047    10:11:35 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:41.047    10:11:35 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:41.047     10:11:35 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:41.047     10:11:35 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:41.047     10:11:35 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:41.047     10:11:35 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:41.047     10:11:35 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:41.047     10:11:35 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:41.047    10:11:35 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 23ff93a2-6fde-435f-94f1-e70641ce1d2f /sys/class/nvme/nvme0/nvme*/uuid'
00:14:41.047  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:41.047   10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme0/nvme0c0n1/uuid
00:14:41.047   10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme0/nvme0c0n1/uuid ]]
00:14:41.047   10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@256 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:41.047   10:11:36 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:14:41.048   10:11:36 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:41.048   10:11:36 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:14:41.048   10:11:36 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:41.048    10:11:36 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:14:41.048   10:11:36 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:41.048   10:11:36 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:41.048   10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:14:41.048   10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:14:41.048   10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:41.048    10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:14:41.048    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:41.048    10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:14:41.048    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:41.048    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:41.048    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:41.048    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:41.048     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:41.048     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:41.048     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:41.048     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:41.048     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:41.048     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:41.048    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:14:41.048  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:41.306   10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:14:41.306   10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:14:41.306    10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l ec6c2f06-4131-4996-abbd-9a4dc16b99f9 /sys/class/nvme/nvme0/nvme*/uuid'
00:14:41.306    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:41.306    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:41.306    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:41.306    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:41.306    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:41.306     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:41.306     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:41.306     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:41.306     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:41.306     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:41.306     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:41.306    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l ec6c2f06-4131-4996-abbd-9a4dc16b99f9 /sys/class/nvme/nvme0/nvme*/uuid'
00:14:41.306  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:41.306   10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:14:41.306   10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:14:41.306   10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:14:41.306   10:11:36 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:14:41.306   10:11:36 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:41.306   10:11:36 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:41.306   10:11:36 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:41.306   10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@257 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:41.306   10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:14:41.306   10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:14:41.306   10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:41.565    10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:14:41.565    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:41.565    10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:14:41.565    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:41.565    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:41.565    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:41.565    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:41.565     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:41.565     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:41.565     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:41.565     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:41.565     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:41.565     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:41.565    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:14:41.565  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:41.565   10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:14:41.565   10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:14:41.565    10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l ec6c2f06-4131-4996-abbd-9a4dc16b99f9 /sys/class/nvme/nvme1/nvme*/uuid'
00:14:41.565    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:41.565    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:41.565    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:41.565    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:41.565    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:41.565     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:41.565     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:41.565     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:41.565     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:41.565     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:41.565     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:41.565    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l ec6c2f06-4131-4996-abbd-9a4dc16b99f9 /sys/class/nvme/nvme1/nvme*/uuid'
00:14:41.565  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:41.824   10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme1/nvme1c1n1/uuid
00:14:41.824   10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme1/nvme1c1n1/uuid ]]
00:14:41.824   10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@258 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:41.824   10:11:36 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:14:41.824   10:11:36 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:41.824   10:11:36 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:14:41.824   10:11:36 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:41.824    10:11:36 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:14:41.824   10:11:36 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:41.824   10:11:36 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:41.824   10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:14:41.824   10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:14:41.824   10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:41.824    10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:14:41.824    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:41.824    10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:14:41.824    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:41.824    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:41.824    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:41.824    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:41.824     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:41.824     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:41.824     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:41.824     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:41.824     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:41.824     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:41.824    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:14:41.824  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:41.824   10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:14:41.824   10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:14:41.824    10:11:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 23ff93a2-6fde-435f-94f1-e70641ce1d2f /sys/class/nvme/nvme1/nvme*/uuid'
00:14:41.824    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:41.824    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:41.824    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:41.824    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:41.824    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:41.824     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:41.824     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:41.824     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:41.824     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:41.824     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:41.824     10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:41.824    10:11:36 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 23ff93a2-6fde-435f-94f1-e70641ce1d2f /sys/class/nvme/nvme1/nvme*/uuid'
00:14:41.824  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:42.082   10:11:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:14:42.082   10:11:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:14:42.082   10:11:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:14:42.082   10:11:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:14:42.082   10:11:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:42.082   10:11:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:42.082   10:11:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:42.082   10:11:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@261 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:42.082   10:11:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:42.082    10:11:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:42.082    10:11:37 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:14:42.341  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:42.341  I0000 00:00:1732093897.364147 1809726 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:42.341  I0000 00:00:1732093897.366009 1809726 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:42.341  {}
00:14:42.341   10:11:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@262 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:42.341   10:11:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:42.341    10:11:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:42.341    10:11:37 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:14:42.907  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:42.907  I0000 00:00:1732093897.764808 1809755 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:42.907  I0000 00:00:1732093897.766771 1809755 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:42.907  {}
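The two detach_volume calls above pass the volume UUID through a `uuid2base64` helper before handing it to sma-client.py, because the SMA DetachVolume request carries the volume id as the UUID's raw 16 bytes, base64-encoded. A minimal sketch of that conversion (assuming this is what sma/common.sh's inline python does; the helper name is taken from the trace, the implementation here is illustrative):

```python
import base64
import uuid


def uuid2base64(u: str) -> str:
    """Encode a UUID's raw 16 bytes as base64 (sketch of the
    uuid2base64 helper seen in the trace above)."""
    return base64.b64encode(uuid.UUID(u).bytes).decode()
```

A 16-byte UUID always encodes to a 24-character base64 string ending in `==`.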
00:14:42.907    10:11:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@263 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:14:42.907    10:11:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:42.907    10:11:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@263 -- # jq -r '.[0].namespaces | length'
00:14:42.907    10:11:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:42.907    10:11:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:42.907   10:11:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@263 -- # [[ 0 -eq 0 ]]
00:14:42.907    10:11:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@264 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:14:42.907    10:11:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:42.907    10:11:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@264 -- # jq -r '.[0].namespaces | length'
00:14:42.907    10:11:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:42.907    10:11:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:42.907   10:11:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@264 -- # [[ 0 -eq 0 ]]
00:14:42.907   10:11:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@265 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:42.907   10:11:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:14:42.907   10:11:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:42.907   10:11:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:14:42.907   10:11:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:42.907    10:11:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:14:42.907   10:11:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:42.908   10:11:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:42.908   10:11:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:14:42.908   10:11:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:14:42.908   10:11:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:42.908    10:11:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:14:42.908    10:11:37 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:42.908    10:11:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:14:42.908    10:11:37 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:42.908    10:11:37 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:42.908    10:11:37 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:42.908    10:11:37 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:42.908     10:11:37 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:42.908     10:11:37 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:42.908     10:11:37 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:42.908     10:11:37 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:42.908     10:11:37 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:42.908     10:11:37 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:42.908    10:11:37 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:14:42.908  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:43.166   10:11:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:14:43.166   10:11:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:14:43.166    10:11:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 23ff93a2-6fde-435f-94f1-e70641ce1d2f /sys/class/nvme/nvme0/nvme*/uuid'
00:14:43.166    10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:43.166    10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:43.166    10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:43.166    10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:43.166    10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:43.166     10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:43.166     10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:43.166     10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:43.166     10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:43.166     10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:43.166     10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:43.166    10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 23ff93a2-6fde-435f-94f1-e70641ce1d2f /sys/class/nvme/nvme0/nvme*/uuid'
00:14:43.166  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:43.166  grep: /sys/class/nvme/nvme0/nvme*/uuid: No such file or directory
00:14:43.166   10:11:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:14:43.166   10:11:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:14:43.166   10:11:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:14:43.166   10:11:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:14:43.166   10:11:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:43.166   10:11:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:43.166   10:11:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
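The vm_check_subsys_volume steps above locate the controller inside the guest by running `grep -l <nqn> /sys/class/nvme/*/subsysnqn` over SSH and piping the matching path through `awk -F/ '{print $5}'` to pull out the controller name (e.g. `nvme0`). The equivalent field extraction, sketched in python:

```python
def controller_from_match(path: str) -> str:
    """Extract the NVMe controller name from a grep -l match such as
    '/sys/class/nvme/nvme0/subsysnqn'.  awk's $5 with -F/ is index 4
    after str.split('/'), since the leading '/' produces an empty
    first field."""
    return path.split("/")[4]
```

With the controller known, the second grep against `/sys/class/nvme/<ctrl>/nvme*/uuid` then checks whether the detached volume's namespace is (correctly) gone.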
00:14:43.166   10:11:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@266 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:43.166   10:11:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:14:43.166   10:11:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:43.166   10:11:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:14:43.166   10:11:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:43.166    10:11:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:14:43.166   10:11:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:43.166   10:11:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:43.166   10:11:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:14:43.166   10:11:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:14:43.166   10:11:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:43.166    10:11:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:14:43.166    10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:43.166    10:11:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:14:43.166    10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:43.166    10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:43.166    10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:43.166    10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:43.167     10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:43.167     10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:43.167     10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:43.167     10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:43.167     10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:43.167     10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:43.167    10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:14:43.167  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:43.425   10:11:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:14:43.425   10:11:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:14:43.425    10:11:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l ec6c2f06-4131-4996-abbd-9a4dc16b99f9 /sys/class/nvme/nvme1/nvme*/uuid'
00:14:43.425    10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:43.425    10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:43.425    10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:43.425    10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:43.425    10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:43.425     10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:43.425     10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:43.425     10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:43.425     10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:43.425     10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:43.425     10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:43.425    10:11:38 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l ec6c2f06-4131-4996-abbd-9a4dc16b99f9 /sys/class/nvme/nvme1/nvme*/uuid'
00:14:43.425  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:43.425  grep: /sys/class/nvme/nvme1/nvme*/uuid: No such file or directory
00:14:43.425   10:11:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:14:43.425   10:11:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:14:43.425   10:11:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:14:43.425   10:11:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:14:43.425   10:11:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:43.425   10:11:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:43.425   10:11:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:43.425   10:11:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@269 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:43.425   10:11:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:43.425    10:11:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:43.425    10:11:38 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:14:43.991  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:43.991  I0000 00:00:1732093898.872213 1809944 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:43.991  I0000 00:00:1732093898.874127 1809944 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:43.991  {}
00:14:43.991   10:11:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@270 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:43.991   10:11:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:43.991    10:11:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:43.991    10:11:38 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:14:44.250  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:44.250  I0000 00:00:1732093899.212262 1810085 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:44.250  I0000 00:00:1732093899.214179 1810085 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:44.250  {}
00:14:44.250   10:11:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@271 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:44.250   10:11:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:44.250    10:11:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 ec6c2f06-4131-4996-abbd-9a4dc16b99f9
00:14:44.250    10:11:39 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:14:44.508  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:44.508  I0000 00:00:1732093899.547933 1810119 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:44.508  I0000 00:00:1732093899.549829 1810119 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:44.508  {}
00:14:44.508   10:11:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@272 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:44.508   10:11:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:44.508    10:11:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:44.508    10:11:39 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:14:45.073  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:45.073  I0000 00:00:1732093899.937681 1810146 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:45.073  I0000 00:00:1732093899.939615 1810146 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:45.073  {}
00:14:45.073   10:11:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@274 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0
00:14:45.073   10:11:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:45.331  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:45.332  I0000 00:00:1732093900.247010 1810237 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:45.332  I0000 00:00:1732093900.248898 1810237 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:45.332  {}
00:14:45.332   10:11:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@275 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-1
00:14:45.332   10:11:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:45.590  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:45.590  I0000 00:00:1732093900.524166 1810321 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:45.590  I0000 00:00:1732093900.525923 1810321 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:45.590  {}
00:14:45.590    10:11:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@278 -- # create_device 42 0
00:14:45.590    10:11:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@278 -- # jq -r .handle
00:14:45.590    10:11:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=42
00:14:45.590    10:11:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:14:45.590    10:11:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:45.848  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:45.848  I0000 00:00:1732093900.803275 1810346 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:45.848  I0000 00:00:1732093900.805182 1810346 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:45.848  [2024-11-20 10:11:40.810161] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-42' does not exist
00:14:46.106   10:11:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@278 -- # device3=nvme:nqn.2016-06.io.spdk:vfiouser-42
00:14:46.106   10:11:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@279 -- # vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-42
00:14:46.106   10:11:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@89 -- # sleep 1
00:14:46.106  [2024-11-20 10:11:41.090217] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-42: enabling controller
00:14:47.040    10:11:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-42 /sys/class/nvme/*/subsysnqn'
00:14:47.040    10:11:41 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:47.040    10:11:41 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:47.040    10:11:41 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:47.040    10:11:41 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:47.040    10:11:41 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:47.040     10:11:41 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:47.040     10:11:41 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:47.040     10:11:41 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:47.040     10:11:41 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:47.040     10:11:41 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:47.040     10:11:41 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:47.040    10:11:41 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-42 /sys/class/nvme/*/subsysnqn'
00:14:47.040  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:47.040   10:11:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # nqn=/sys/class/nvme/nvme0/subsysnqn
00:14:47.040   10:11:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@91 -- # [[ -z /sys/class/nvme/nvme0/subsysnqn ]]
00:14:47.040   10:11:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@282 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-42
00:14:47.040   10:11:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:47.299  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:47.299  I0000 00:00:1732093902.402388 1810507 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:47.299  I0000 00:00:1732093902.404221 1810507 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:47.561  {}
00:14:47.561   10:11:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@283 -- # NOT vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-42
00:14:47.561   10:11:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:14:47.561   10:11:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-42
00:14:47.561   10:11:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_nqn
00:14:47.561   10:11:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:47.561    10:11:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_nqn
00:14:47.561   10:11:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:47.561   10:11:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-42
00:14:47.561   10:11:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@89 -- # sleep 1
00:14:48.557    10:11:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-42 /sys/class/nvme/*/subsysnqn'
00:14:48.557    10:11:43 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:48.557    10:11:43 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:48.557    10:11:43 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:48.557    10:11:43 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:48.557    10:11:43 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:48.557     10:11:43 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:48.557     10:11:43 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:48.557     10:11:43 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:48.557     10:11:43 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:48.557     10:11:43 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:48.557     10:11:43 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:48.557    10:11:43 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-42 /sys/class/nvme/*/subsysnqn'
00:14:48.557  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:48.557  grep: /sys/class/nvme/*/subsysnqn: No such file or directory
00:14:48.557   10:11:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # nqn=
00:14:48.557   10:11:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@91 -- # [[ -z '' ]]
00:14:48.557   10:11:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@92 -- # error 'FAILED no NVMe on vm=0 with nqn=nqn.2016-06.io.spdk:vfiouser-42'
00:14:48.557   10:11:43 sma.sma_vfiouser_qemu -- vhost/common.sh@82 -- # echo ===========
00:14:48.557  ===========
00:14:48.557   10:11:43 sma.sma_vfiouser_qemu -- vhost/common.sh@83 -- # message ERROR 'FAILED no NVMe on vm=0 with nqn=nqn.2016-06.io.spdk:vfiouser-42'
00:14:48.557   10:11:43 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:14:48.557   10:11:43 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:14:48.557   10:11:43 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:14:48.557   10:11:43 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=ERROR
00:14:48.557   10:11:43 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:14:48.557   10:11:43 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'ERROR: FAILED no NVMe on vm=0 with nqn=nqn.2016-06.io.spdk:vfiouser-42'
00:14:48.557  ERROR: FAILED no NVMe on vm=0 with nqn=nqn.2016-06.io.spdk:vfiouser-42
00:14:48.558   10:11:43 sma.sma_vfiouser_qemu -- vhost/common.sh@84 -- # echo ===========
00:14:48.558  ===========
00:14:48.558   10:11:43 sma.sma_vfiouser_qemu -- vhost/common.sh@86 -- # false
00:14:48.558   10:11:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@93 -- # return 1
00:14:48.558   10:11:43 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:14:48.558   10:11:43 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:48.558   10:11:43 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:48.558   10:11:43 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
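The NOT wrapper used above inverts a command's exit status so that "the subsystem must no longer be visible in the guest" can be asserted as a passing step: the inner vm_check_subsys_nqn fails (es=1), and NOT treats that failure as success. A sketch of the same pattern (illustrative python, not the real bash helper from autotest_common.sh):

```python
import subprocess


def expect_failure(*cmd: str) -> None:
    """Run a command and raise if it unexpectedly succeeds, mirroring
    the NOT helper's inversion of the exit status."""
    result = subprocess.run(cmd, capture_output=True)
    if result.returncode == 0:
        raise AssertionError(f"{cmd!r} succeeded but was expected to fail")
```

Note the trailing `(( !es == 0 ))` in the trace: a nonzero es from the wrapped command makes the negated expression succeed, so the test step passes.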
00:14:48.558   10:11:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@285 -- # key0=1234567890abcdef1234567890abcdef
00:14:48.558    10:11:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@286 -- # create_device 0 0
00:14:48.558    10:11:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@286 -- # jq -r .handle
00:14:48.558    10:11:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0
00:14:48.558    10:11:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:14:48.558    10:11:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:48.816  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:48.816  I0000 00:00:1732093903.851163 1810804 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:48.816  I0000 00:00:1732093903.853098 1810804 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:48.816  [2024-11-20 10:11:43.859541] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:14:49.074   10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@286 -- # device0=nvme:nqn.2016-06.io.spdk:vfiouser-0
00:14:49.074    10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@287 -- # rpc_cmd bdev_get_bdevs -b null0
00:14:49.074    10:11:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:49.074    10:11:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:49.074    10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@287 -- # jq -r '.[].uuid'
00:14:49.074    10:11:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:49.074   10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@287 -- # uuid0=23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:49.074   10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@290 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:49.074    10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@290 -- # uuid2base64 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:49.074    10:11:44 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:14:49.074    10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@290 -- # get_cipher AES_CBC
00:14:49.074    10:11:44 sma.sma_vfiouser_qemu -- sma/common.sh@27 -- # case "$1" in
00:14:49.074    10:11:44 sma.sma_vfiouser_qemu -- sma/common.sh@28 -- # echo 0
00:14:49.074    10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@290 -- # format_key 1234567890abcdef1234567890abcdef
00:14:49.074    10:11:44 sma.sma_vfiouser_qemu -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:14:49.074     10:11:44 sma.sma_vfiouser_qemu -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
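The format_key step above base64-encodes the raw crypto key (`echo -n` into `base64 -w 0`, i.e. a single unwrapped line) before it is embedded in the AttachVolume request alongside the AES_CBC cipher id. The same encoding in python (a sketch matching the shell pipeline seen in the trace):

```python
import base64


def format_key(key: str) -> str:
    """Base64-encode a raw crypto key as one unwrapped line,
    equivalent to: echo -n "$key" | base64 -w 0"""
    return base64.b64encode(key.encode()).decode()
```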
00:14:49.074  [2024-11-20 10:11:44.127469] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-0: enabling controller
00:14:49.332  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:49.332  I0000 00:00:1732093904.345590 1810835 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:49.332  I0000 00:00:1732093904.347518 1810835 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:49.332  {}
00:14:49.332    10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@307 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:14:49.332    10:11:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:49.332    10:11:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:49.332    10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@307 -- # jq -r '.[0].namespaces[0].name'
00:14:49.332    10:11:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:49.591   10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@307 -- # ns_bdev=7f22a75e-74d3-4c1c-a0aa-fcb161f94e2c
00:14:49.591    10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@308 -- # rpc_cmd bdev_get_bdevs -b 7f22a75e-74d3-4c1c-a0aa-fcb161f94e2c
00:14:49.591    10:11:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:49.591    10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@308 -- # jq -r '.[0].product_name'
00:14:49.591    10:11:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:49.591    10:11:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:49.591   10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@308 -- # [[ crypto == \c\r\y\p\t\o ]]
00:14:49.591    10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@309 -- # rpc_cmd bdev_get_bdevs -b 7f22a75e-74d3-4c1c-a0aa-fcb161f94e2c
00:14:49.591    10:11:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:49.591    10:11:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:49.591    10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@309 -- # jq -r '.[] | select(.product_name == "crypto")'
00:14:49.591    10:11:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:49.591   10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@309 -- # crypto_bdev='{
00:14:49.591    "name": "7f22a75e-74d3-4c1c-a0aa-fcb161f94e2c",
00:14:49.591    "aliases": [
00:14:49.591      "bf8d531b-f6f8-5df2-ba63-f88ee85b1f3d"
00:14:49.591    ],
00:14:49.591    "product_name": "crypto",
00:14:49.591    "block_size": 4096,
00:14:49.591    "num_blocks": 25600,
00:14:49.591    "uuid": "bf8d531b-f6f8-5df2-ba63-f88ee85b1f3d",
00:14:49.591    "assigned_rate_limits": {
00:14:49.591      "rw_ios_per_sec": 0,
00:14:49.591      "rw_mbytes_per_sec": 0,
00:14:49.591      "r_mbytes_per_sec": 0,
00:14:49.591      "w_mbytes_per_sec": 0
00:14:49.591    },
00:14:49.592    "claimed": true,
00:14:49.592    "claim_type": "exclusive_write",
00:14:49.592    "zoned": false,
00:14:49.592    "supported_io_types": {
00:14:49.592      "read": true,
00:14:49.592      "write": true,
00:14:49.592      "unmap": false,
00:14:49.592      "flush": false,
00:14:49.592      "reset": true,
00:14:49.592      "nvme_admin": false,
00:14:49.592      "nvme_io": false,
00:14:49.592      "nvme_io_md": false,
00:14:49.592      "write_zeroes": true,
00:14:49.592      "zcopy": false,
00:14:49.592      "get_zone_info": false,
00:14:49.592      "zone_management": false,
00:14:49.592      "zone_append": false,
00:14:49.592      "compare": false,
00:14:49.592      "compare_and_write": false,
00:14:49.592      "abort": false,
00:14:49.592      "seek_hole": false,
00:14:49.592      "seek_data": false,
00:14:49.592      "copy": false,
00:14:49.592      "nvme_iov_md": false
00:14:49.592    },
00:14:49.592    "memory_domains": [
00:14:49.592      {
00:14:49.592        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:49.592        "dma_device_type": 2
00:14:49.592      }
00:14:49.592    ],
00:14:49.592    "driver_specific": {
00:14:49.592      "crypto": {
00:14:49.592        "base_bdev_name": "null0",
00:14:49.592        "name": "7f22a75e-74d3-4c1c-a0aa-fcb161f94e2c",
00:14:49.592        "key_name": "7f22a75e-74d3-4c1c-a0aa-fcb161f94e2c_AES_CBC"
00:14:49.592      }
00:14:49.592    }
00:14:49.592  }'
00:14:49.592    10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@310 -- # rpc_cmd bdev_get_bdevs
00:14:49.592    10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@310 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:14:49.592    10:11:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:49.592    10:11:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:49.592    10:11:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:49.592   10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@310 -- # [[ 1 -eq 1 ]]
00:14:49.592    10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@312 -- # jq -r .driver_specific.crypto.key_name
00:14:49.592   10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@312 -- # key_name=7f22a75e-74d3-4c1c-a0aa-fcb161f94e2c_AES_CBC
00:14:49.592    10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@313 -- # rpc_cmd accel_crypto_keys_get -k 7f22a75e-74d3-4c1c-a0aa-fcb161f94e2c_AES_CBC
00:14:49.592    10:11:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:49.592    10:11:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:49.592    10:11:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:49.592   10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@313 -- # key_obj='[
00:14:49.592  {
00:14:49.592  "name": "7f22a75e-74d3-4c1c-a0aa-fcb161f94e2c_AES_CBC",
00:14:49.592  "cipher": "AES_CBC",
00:14:49.592  "key": "1234567890abcdef1234567890abcdef"
00:14:49.592  }
00:14:49.592  ]'
00:14:49.592    10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@314 -- # jq -r '.[0].key'
00:14:49.592   10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@314 -- # [[ 1234567890abcdef1234567890abcdef == \1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f\1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f ]]
00:14:49.592    10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@315 -- # jq -r '.[0].cipher'
00:14:49.592   10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@315 -- # [[ AES_CBC == \A\E\S\_\C\B\C ]]
00:14:49.592   10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@317 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:49.592   10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:49.592    10:11:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:49.592    10:11:44 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:14:50.158  I0000 00:00:1732093904.978236 1811006 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:50.158  I0000 00:00:1732093904.980335 1811006 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:50.158  {}
00:14:50.158   10:11:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@318 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0
00:14:50.158   10:11:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:50.416  I0000 00:00:1732093905.295594 1811034 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:50.416  I0000 00:00:1732093905.297460 1811034 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:50.416  {}
00:14:50.416    10:11:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@319 -- # rpc_cmd bdev_get_bdevs
00:14:50.416    10:11:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.416    10:11:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@319 -- # jq -r '.[] | select(.product_name == "crypto")'
00:14:50.416    10:11:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:50.416    10:11:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@319 -- # jq -r length
00:14:50.416    10:11:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.416   10:11:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@319 -- # [[ '' -eq 0 ]]
00:14:50.416   10:11:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@322 -- # device_vfio_user=1
00:14:50.416    10:11:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@323 -- # create_device 0 0
00:14:50.416    10:11:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0
00:14:50.416    10:11:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@323 -- # jq -r .handle
00:14:50.416    10:11:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:14:50.416    10:11:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:50.675  I0000 00:00:1732093905.620652 1811072 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:50.675  I0000 00:00:1732093905.622642 1811072 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:50.675  [2024-11-20 10:11:45.625304] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:14:50.675   10:11:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@323 -- # device0=nvme:nqn.2016-06.io.spdk:vfiouser-0
00:14:50.675   10:11:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@324 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:50.675   10:11:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:50.675    10:11:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:50.675    10:11:45 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:14:50.933  [2024-11-20 10:11:45.886201] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-0: enabling controller
00:14:51.191  I0000 00:00:1732093906.067557 1811212 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:51.191  I0000 00:00:1732093906.069454 1811212 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:51.191  {}
00:14:51.191   10:11:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@327 -- # diff /dev/fd/62 /dev/fd/61
00:14:51.191    10:11:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@327 -- # jq --sort-keys
00:14:51.191    10:11:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@327 -- # get_qos_caps 1
00:14:51.191    10:11:46 sma.sma_vfiouser_qemu -- sma/common.sh@45 -- # local rootdir
00:14:51.191    10:11:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@327 -- # jq --sort-keys
00:14:51.191     10:11:46 sma.sma_vfiouser_qemu -- sma/common.sh@47 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:14:51.191    10:11:46 sma.sma_vfiouser_qemu -- sma/common.sh@47 -- # rootdir=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../..
00:14:51.191    10:11:46 sma.sma_vfiouser_qemu -- sma/common.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py
00:14:51.449  I0000 00:00:1732093906.372632 1811250 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:51.449  I0000 00:00:1732093906.374530 1811250 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:51.449   10:11:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@340 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:51.449    10:11:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@340 -- # uuid2base64 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:51.449    10:11:46 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:14:51.707  I0000 00:00:1732093906.678746 1811277 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:51.707  I0000 00:00:1732093906.680465 1811277 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:51.707  {}
00:14:51.707   10:11:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@359 -- # diff /dev/fd/62 /dev/fd/61
00:14:51.707    10:11:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@359 -- # jq --sort-keys
00:14:51.707    10:11:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@359 -- # rpc_cmd bdev_get_bdevs -b null0
00:14:51.707    10:11:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@359 -- # jq --sort-keys '.[].assigned_rate_limits'
00:14:51.707    10:11:46 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:51.707    10:11:46 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:51.707    10:11:46 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:51.707   10:11:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@370 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:51.707   10:11:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:51.707    10:11:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 23ff93a2-6fde-435f-94f1-e70641ce1d2f
00:14:51.707    10:11:46 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:14:51.966  I0000 00:00:1732093907.051717 1811364 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:51.966  I0000 00:00:1732093907.053529 1811364 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:52.224  {}
00:14:52.224   10:11:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@371 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0
00:14:52.224   10:11:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:52.482  I0000 00:00:1732093907.360833 1811456 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:52.482  I0000 00:00:1732093907.362685 1811456 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:52.482  {}
00:14:52.482   10:11:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@373 -- # cleanup
00:14:52.482   10:11:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@98 -- # vm_kill_all
00:14:52.482   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@476 -- # local vm
00:14:52.482    10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@477 -- # vm_list_all
00:14:52.482    10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@466 -- # vms=()
00:14:52.483    10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@466 -- # local vms
00:14:52.483    10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:14:52.483    10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:14:52.483    10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/0
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@478 -- # vm_kill 0
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@442 -- # vm_num_is_valid 0
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/0
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@449 -- # local vm_pid
00:14:52.483    10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@450 -- # cat /root/vhost_test/vms/0/qemu.pid
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@450 -- # vm_pid=1804483
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@452 -- # notice 'Killing virtual machine /root/vhost_test/vms/0 (pid=1804483)'
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'Killing virtual machine /root/vhost_test/vms/0 (pid=1804483)'
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: Killing virtual machine /root/vhost_test/vms/0 (pid=1804483)'
00:14:52.483  INFO: Killing virtual machine /root/vhost_test/vms/0 (pid=1804483)
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@454 -- # /bin/kill 1804483
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@455 -- # notice 'process 1804483 killed'
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'process 1804483 killed'
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: process 1804483 killed'
00:14:52.483  INFO: process 1804483 killed
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@456 -- # rm -rf /root/vhost_test/vms/0
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@99 -- # killprocess 1807214
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@954 -- # '[' -z 1807214 ']'
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@958 -- # kill -0 1807214
00:14:52.483    10:11:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@959 -- # uname
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:52.483    10:11:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1807214
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1807214'
00:14:52.483  killing process with pid 1807214
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@973 -- # kill 1807214
00:14:52.483   10:11:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@978 -- # wait 1807214
00:14:54.383   10:11:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@100 -- # killprocess 1807485
00:14:54.383   10:11:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@954 -- # '[' -z 1807485 ']'
00:14:54.383   10:11:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@958 -- # kill -0 1807485
00:14:54.383    10:11:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@959 -- # uname
00:14:54.383   10:11:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:54.383    10:11:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1807485
00:14:54.383   10:11:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@960 -- # process_name=python3
00:14:54.383   10:11:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:14:54.383   10:11:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1807485'
00:14:54.383  killing process with pid 1807485
00:14:54.383   10:11:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@973 -- # kill 1807485
00:14:54.383   10:11:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@978 -- # wait 1807485
00:14:54.383   10:11:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@101 -- # '[' -e /tmp/sma/vfio-user/qemu ']'
00:14:54.383   10:11:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@101 -- # rm -rf /tmp/sma/vfio-user/qemu
00:14:54.383   10:11:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@374 -- # trap - SIGINT SIGTERM EXIT
00:14:54.383  
00:14:54.383  real	0m52.252s
00:14:54.383  user	0m39.611s
00:14:54.383  sys	0m3.868s
00:14:54.383   10:11:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:54.383   10:11:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:54.383  ************************************
00:14:54.383  END TEST sma_vfiouser_qemu
00:14:54.383  ************************************
00:14:54.383   10:11:49 sma -- sma/sma.sh@13 -- # run_test sma_plugins /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins.sh
00:14:54.383   10:11:49 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:54.383   10:11:49 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:54.383   10:11:49 sma -- common/autotest_common.sh@10 -- # set +x
00:14:54.383  ************************************
00:14:54.383  START TEST sma_plugins
00:14:54.383  ************************************
00:14:54.383   10:11:49 sma.sma_plugins -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins.sh
00:14:54.643  * Looking for test storage...
00:14:54.643  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:14:54.643    10:11:49 sma.sma_plugins -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:14:54.643     10:11:49 sma.sma_plugins -- common/autotest_common.sh@1693 -- # lcov --version
00:14:54.643     10:11:49 sma.sma_plugins -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:14:54.643    10:11:49 sma.sma_plugins -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:14:54.643    10:11:49 sma.sma_plugins -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:54.643    10:11:49 sma.sma_plugins -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:54.643    10:11:49 sma.sma_plugins -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:54.643    10:11:49 sma.sma_plugins -- scripts/common.sh@336 -- # IFS=.-:
00:14:54.643    10:11:49 sma.sma_plugins -- scripts/common.sh@336 -- # read -ra ver1
00:14:54.643    10:11:49 sma.sma_plugins -- scripts/common.sh@337 -- # IFS=.-:
00:14:54.643    10:11:49 sma.sma_plugins -- scripts/common.sh@337 -- # read -ra ver2
00:14:54.643    10:11:49 sma.sma_plugins -- scripts/common.sh@338 -- # local 'op=<'
00:14:54.643    10:11:49 sma.sma_plugins -- scripts/common.sh@340 -- # ver1_l=2
00:14:54.643    10:11:49 sma.sma_plugins -- scripts/common.sh@341 -- # ver2_l=1
00:14:54.643    10:11:49 sma.sma_plugins -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:54.643    10:11:49 sma.sma_plugins -- scripts/common.sh@344 -- # case "$op" in
00:14:54.643    10:11:49 sma.sma_plugins -- scripts/common.sh@345 -- # : 1
00:14:54.643    10:11:49 sma.sma_plugins -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:54.643    10:11:49 sma.sma_plugins -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:54.643     10:11:49 sma.sma_plugins -- scripts/common.sh@365 -- # decimal 1
00:14:54.643     10:11:49 sma.sma_plugins -- scripts/common.sh@353 -- # local d=1
00:14:54.643     10:11:49 sma.sma_plugins -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:54.643     10:11:49 sma.sma_plugins -- scripts/common.sh@355 -- # echo 1
00:14:54.643    10:11:49 sma.sma_plugins -- scripts/common.sh@365 -- # ver1[v]=1
00:14:54.643     10:11:49 sma.sma_plugins -- scripts/common.sh@366 -- # decimal 2
00:14:54.643     10:11:49 sma.sma_plugins -- scripts/common.sh@353 -- # local d=2
00:14:54.643     10:11:49 sma.sma_plugins -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:54.643     10:11:49 sma.sma_plugins -- scripts/common.sh@355 -- # echo 2
00:14:54.643    10:11:49 sma.sma_plugins -- scripts/common.sh@366 -- # ver2[v]=2
00:14:54.643    10:11:49 sma.sma_plugins -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:54.643    10:11:49 sma.sma_plugins -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:54.643    10:11:49 sma.sma_plugins -- scripts/common.sh@368 -- # return 0
00:14:54.643    10:11:49 sma.sma_plugins -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:54.643    10:11:49 sma.sma_plugins -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:14:54.643  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:54.643  		--rc genhtml_branch_coverage=1
00:14:54.643  		--rc genhtml_function_coverage=1
00:14:54.643  		--rc genhtml_legend=1
00:14:54.643  		--rc geninfo_all_blocks=1
00:14:54.643  		--rc geninfo_unexecuted_blocks=1
00:14:54.643  		
00:14:54.643  		'
00:14:54.643    10:11:49 sma.sma_plugins -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:14:54.643  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:54.643  		--rc genhtml_branch_coverage=1
00:14:54.643  		--rc genhtml_function_coverage=1
00:14:54.643  		--rc genhtml_legend=1
00:14:54.643  		--rc geninfo_all_blocks=1
00:14:54.643  		--rc geninfo_unexecuted_blocks=1
00:14:54.643  		
00:14:54.643  		'
00:14:54.643    10:11:49 sma.sma_plugins -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:14:54.643  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:54.643  		--rc genhtml_branch_coverage=1
00:14:54.643  		--rc genhtml_function_coverage=1
00:14:54.643  		--rc genhtml_legend=1
00:14:54.643  		--rc geninfo_all_blocks=1
00:14:54.643  		--rc geninfo_unexecuted_blocks=1
00:14:54.643  		
00:14:54.643  		'
00:14:54.643    10:11:49 sma.sma_plugins -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:14:54.643  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:54.643  		--rc genhtml_branch_coverage=1
00:14:54.643  		--rc genhtml_function_coverage=1
00:14:54.643  		--rc genhtml_legend=1
00:14:54.643  		--rc geninfo_all_blocks=1
00:14:54.643  		--rc geninfo_unexecuted_blocks=1
00:14:54.643  		
00:14:54.643  		'
00:14:54.643   10:11:49 sma.sma_plugins -- sma/plugins.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:14:54.643   10:11:49 sma.sma_plugins -- sma/plugins.sh@28 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:14:54.643   10:11:49 sma.sma_plugins -- sma/plugins.sh@31 -- # tgtpid=1811824
00:14:54.643   10:11:49 sma.sma_plugins -- sma/plugins.sh@30 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:14:54.643   10:11:49 sma.sma_plugins -- sma/plugins.sh@43 -- # smapid=1811825
00:14:54.643   10:11:49 sma.sma_plugins -- sma/plugins.sh@45 -- # sma_waitforlisten
00:14:54.643   10:11:49 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:14:54.643   10:11:49 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:14:54.643   10:11:49 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:14:54.643   10:11:49 sma.sma_plugins -- sma/plugins.sh@34 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:14:54.643    10:11:49 sma.sma_plugins -- sma/plugins.sh@34 -- # cat
00:14:54.643   10:11:49 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:14:54.643   10:11:49 sma.sma_plugins -- sma/plugins.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:14:54.643   10:11:49 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:54.643   10:11:49 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:14:54.643  [2024-11-20 10:11:49.721349] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:14:54.643  [2024-11-20 10:11:49.721474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1811824 ]
00:14:54.902  EAL: No free 2048 kB hugepages reported on node 1
00:14:54.902  [2024-11-20 10:11:49.852566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:54.902  [2024-11-20 10:11:49.965967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:55.836   10:11:50 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:14:55.836   10:11:50 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:14:55.836   10:11:50 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:55.836   10:11:50 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:14:55.836  I0000 00:00:1732093910.867708 1811825 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:56.769   10:11:51 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:14:56.769   10:11:51 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:14:56.769   10:11:51 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:56.769   10:11:51 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:14:56.769    10:11:51 sma.sma_plugins -- sma/plugins.sh@47 -- # create_device nvme
00:14:56.769    10:11:51 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:56.769    10:11:51 sma.sma_plugins -- sma/plugins.sh@47 -- # jq -r .handle
00:14:57.027  I0000 00:00:1732093911.939551 1812122 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:57.027  I0000 00:00:1732093911.941605 1812122 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:57.027   10:11:51 sma.sma_plugins -- sma/plugins.sh@47 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:14:57.027    10:11:51 sma.sma_plugins -- sma/plugins.sh@48 -- # create_device nvmf_tcp
00:14:57.027    10:11:51 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:57.027    10:11:51 sma.sma_plugins -- sma/plugins.sh@48 -- # jq -r .handle
00:14:57.285  I0000 00:00:1732093912.203708 1812148 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:57.285  I0000 00:00:1732093912.205616 1812148 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:57.285   10:11:52 sma.sma_plugins -- sma/plugins.sh@48 -- # [[ nvmf_tcp:plugin1-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:14:57.285   10:11:52 sma.sma_plugins -- sma/plugins.sh@50 -- # killprocess 1811825
00:14:57.285   10:11:52 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 1811825 ']'
00:14:57.285   10:11:52 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 1811825
00:14:57.285    10:11:52 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:14:57.285   10:11:52 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:57.285    10:11:52 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1811825
00:14:57.285   10:11:52 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:14:57.285   10:11:52 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:14:57.285   10:11:52 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1811825'
00:14:57.285  killing process with pid 1811825
00:14:57.285   10:11:52 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 1811825
00:14:57.285   10:11:52 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 1811825
00:14:57.285   10:11:52 sma.sma_plugins -- sma/plugins.sh@61 -- # smapid=1812176
00:14:57.286   10:11:52 sma.sma_plugins -- sma/plugins.sh@62 -- # sma_waitforlisten
00:14:57.286   10:11:52 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:14:57.286   10:11:52 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:14:57.286   10:11:52 sma.sma_plugins -- sma/plugins.sh@53 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:14:57.286   10:11:52 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:14:57.286   10:11:52 sma.sma_plugins -- sma/plugins.sh@53 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:14:57.286   10:11:52 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:14:57.286   10:11:52 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:57.286    10:11:52 sma.sma_plugins -- sma/plugins.sh@53 -- # cat
00:14:57.286   10:11:52 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:14:57.544  I0000 00:00:1732093912.563087 1812176 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:58.478   10:11:53 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:14:58.478   10:11:53 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:14:58.478   10:11:53 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:58.478   10:11:53 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:14:58.478    10:11:53 sma.sma_plugins -- sma/plugins.sh@64 -- # create_device nvmf_tcp
00:14:58.478    10:11:53 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:58.478    10:11:53 sma.sma_plugins -- sma/plugins.sh@64 -- # jq -r .handle
00:14:58.736  I0000 00:00:1732093913.609804 1812341 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:58.736  I0000 00:00:1732093913.611738 1812341 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:58.736   10:11:53 sma.sma_plugins -- sma/plugins.sh@64 -- # [[ nvmf_tcp:plugin1-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:14:58.736   10:11:53 sma.sma_plugins -- sma/plugins.sh@65 -- # NOT create_device nvme
00:14:58.736   10:11:53 sma.sma_plugins -- common/autotest_common.sh@652 -- # local es=0
00:14:58.736   10:11:53 sma.sma_plugins -- common/autotest_common.sh@654 -- # valid_exec_arg create_device nvme
00:14:58.736   10:11:53 sma.sma_plugins -- common/autotest_common.sh@640 -- # local arg=create_device
00:14:58.736   10:11:53 sma.sma_plugins -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:58.736    10:11:53 sma.sma_plugins -- common/autotest_common.sh@644 -- # type -t create_device
00:14:58.736   10:11:53 sma.sma_plugins -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:58.736   10:11:53 sma.sma_plugins -- common/autotest_common.sh@655 -- # create_device nvme
00:14:58.736   10:11:53 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:58.994  I0000 00:00:1732093913.880099 1812451 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:58.994  I0000 00:00:1732093913.881872 1812451 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:58.994  Traceback (most recent call last):
00:14:58.994    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:14:58.994      main(sys.argv[1:])
00:14:58.994    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:14:58.994      result = client.call(request['method'], request.get('params', {}))
00:14:58.994               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:14:58.994    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:14:58.994      response = func(request=json_format.ParseDict(params, input()))
00:14:58.994                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:14:58.994    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:14:58.994      return _end_unary_response_blocking(state, call, False, None)
00:14:58.994             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:14:58.994    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:14:58.994      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:14:58.994      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:14:58.994  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:14:58.994  	status = StatusCode.INVALID_ARGUMENT
00:14:58.994  	details = "Unsupported device type"
00:14:58.994  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {grpc_message:"Unsupported device type", grpc_status:3, created_time:"2024-11-20T10:11:53.884247636+01:00"}"
00:14:58.994  >
00:14:58.994   10:11:53 sma.sma_plugins -- common/autotest_common.sh@655 -- # es=1
00:14:58.994   10:11:53 sma.sma_plugins -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:58.994   10:11:53 sma.sma_plugins -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:58.994   10:11:53 sma.sma_plugins -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:58.994   10:11:53 sma.sma_plugins -- sma/plugins.sh@67 -- # killprocess 1812176
00:14:58.994   10:11:53 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 1812176 ']'
00:14:58.994   10:11:53 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 1812176
00:14:58.994    10:11:53 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:14:58.994   10:11:53 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:58.994    10:11:53 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1812176
00:14:58.994   10:11:53 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:14:58.994   10:11:53 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:14:58.994   10:11:53 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1812176'
00:14:58.994  killing process with pid 1812176
00:14:58.994   10:11:53 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 1812176
00:14:58.994   10:11:53 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 1812176
00:14:58.994   10:11:53 sma.sma_plugins -- sma/plugins.sh@80 -- # smapid=1812520
00:14:58.994   10:11:53 sma.sma_plugins -- sma/plugins.sh@81 -- # sma_waitforlisten
00:14:58.994   10:11:53 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:14:58.994   10:11:53 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:14:58.994   10:11:53 sma.sma_plugins -- sma/plugins.sh@70 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:14:58.994    10:11:53 sma.sma_plugins -- sma/plugins.sh@70 -- # cat
00:14:58.994   10:11:53 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:14:58.994   10:11:53 sma.sma_plugins -- sma/plugins.sh@70 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:14:58.994   10:11:53 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:14:58.994   10:11:53 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:58.994   10:11:54 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:14:59.254  I0000 00:00:1732093914.237179 1812520 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:00.187   10:11:55 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:00.187   10:11:55 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:00.187   10:11:55 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:00.187   10:11:55 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:15:00.187    10:11:55 sma.sma_plugins -- sma/plugins.sh@83 -- # create_device nvme
00:15:00.187    10:11:55 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:00.187    10:11:55 sma.sma_plugins -- sma/plugins.sh@83 -- # jq -r .handle
00:15:00.187  I0000 00:00:1732093915.277282 1812681 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:00.187  I0000 00:00:1732093915.279064 1812681 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:00.187   10:11:55 sma.sma_plugins -- sma/plugins.sh@83 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:15:00.445    10:11:55 sma.sma_plugins -- sma/plugins.sh@84 -- # create_device nvmf_tcp
00:15:00.445    10:11:55 sma.sma_plugins -- sma/plugins.sh@84 -- # jq -r .handle
00:15:00.445    10:11:55 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:00.445  I0000 00:00:1732093915.541447 1812712 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:00.445  I0000 00:00:1732093915.543221 1812712 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:00.703   10:11:55 sma.sma_plugins -- sma/plugins.sh@84 -- # [[ nvmf_tcp:plugin1-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:15:00.703   10:11:55 sma.sma_plugins -- sma/plugins.sh@86 -- # killprocess 1812520
00:15:00.703   10:11:55 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 1812520 ']'
00:15:00.703   10:11:55 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 1812520
00:15:00.703    10:11:55 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:00.703   10:11:55 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:00.703    10:11:55 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1812520
00:15:00.703   10:11:55 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:15:00.703   10:11:55 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:00.703   10:11:55 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1812520'
00:15:00.703  killing process with pid 1812520
00:15:00.703   10:11:55 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 1812520
00:15:00.703   10:11:55 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 1812520
00:15:00.703   10:11:55 sma.sma_plugins -- sma/plugins.sh@99 -- # smapid=1812747
00:15:00.703   10:11:55 sma.sma_plugins -- sma/plugins.sh@100 -- # sma_waitforlisten
00:15:00.703   10:11:55 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:00.703   10:11:55 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:15:00.703   10:11:55 sma.sma_plugins -- sma/plugins.sh@89 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:15:00.703   10:11:55 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:15:00.703   10:11:55 sma.sma_plugins -- sma/plugins.sh@89 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:00.703    10:11:55 sma.sma_plugins -- sma/plugins.sh@89 -- # cat
00:15:00.703   10:11:55 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:00.703   10:11:55 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:00.703   10:11:55 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:15:00.961  I0000 00:00:1732093915.896738 1812747 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:01.894   10:11:56 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:01.894   10:11:56 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:01.894   10:11:56 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:01.894   10:11:56 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:15:01.894    10:11:56 sma.sma_plugins -- sma/plugins.sh@102 -- # create_device nvme
00:15:01.894    10:11:56 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:01.894    10:11:56 sma.sma_plugins -- sma/plugins.sh@102 -- # jq -r .handle
00:15:01.894  I0000 00:00:1732093916.935677 1812908 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:01.894  I0000 00:00:1732093916.937525 1812908 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:01.894   10:11:56 sma.sma_plugins -- sma/plugins.sh@102 -- # [[ nvme:plugin2-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:15:01.894    10:11:56 sma.sma_plugins -- sma/plugins.sh@103 -- # create_device nvmf_tcp
00:15:01.894    10:11:56 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:01.894    10:11:56 sma.sma_plugins -- sma/plugins.sh@103 -- # jq -r .handle
00:15:02.152  I0000 00:00:1732093917.206780 1812940 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:02.152  I0000 00:00:1732093917.208696 1812940 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:02.152   10:11:57 sma.sma_plugins -- sma/plugins.sh@103 -- # [[ nvmf_tcp:plugin2-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:15:02.152   10:11:57 sma.sma_plugins -- sma/plugins.sh@105 -- # killprocess 1812747
00:15:02.152   10:11:57 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 1812747 ']'
00:15:02.152   10:11:57 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 1812747
00:15:02.152    10:11:57 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:02.152   10:11:57 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:02.152    10:11:57 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1812747
00:15:02.152   10:11:57 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:15:02.152   10:11:57 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:02.152   10:11:57 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1812747'
00:15:02.152  killing process with pid 1812747
00:15:02.152   10:11:57 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 1812747
00:15:02.152   10:11:57 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 1812747
00:15:02.410   10:11:57 sma.sma_plugins -- sma/plugins.sh@118 -- # smapid=1813082
00:15:02.410   10:11:57 sma.sma_plugins -- sma/plugins.sh@119 -- # sma_waitforlisten
00:15:02.410   10:11:57 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:02.410   10:11:57 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:15:02.410   10:11:57 sma.sma_plugins -- sma/plugins.sh@108 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:15:02.410   10:11:57 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:15:02.410   10:11:57 sma.sma_plugins -- sma/plugins.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:02.410   10:11:57 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:02.410   10:11:57 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:02.410    10:11:57 sma.sma_plugins -- sma/plugins.sh@108 -- # cat
00:15:02.410   10:11:57 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:15:02.668  I0000 00:00:1732093917.561606 1813082 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:03.233   10:11:58 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:03.233   10:11:58 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:03.233   10:11:58 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:03.491   10:11:58 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:15:03.491    10:11:58 sma.sma_plugins -- sma/plugins.sh@121 -- # create_device nvme
00:15:03.491    10:11:58 sma.sma_plugins -- sma/plugins.sh@121 -- # jq -r .handle
00:15:03.491    10:11:58 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:03.491  I0000 00:00:1732093918.603321 1813248 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:03.491  I0000 00:00:1732093918.605087 1813248 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:03.748   10:11:58 sma.sma_plugins -- sma/plugins.sh@121 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:15:03.748    10:11:58 sma.sma_plugins -- sma/plugins.sh@122 -- # create_device nvmf_tcp
00:15:03.748    10:11:58 sma.sma_plugins -- sma/plugins.sh@122 -- # jq -r .handle
00:15:03.748    10:11:58 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:03.748  I0000 00:00:1732093918.856729 1813285 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:03.748  I0000 00:00:1732093918.858697 1813285 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:04.006   10:11:58 sma.sma_plugins -- sma/plugins.sh@122 -- # [[ nvmf_tcp:plugin2-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:15:04.006   10:11:58 sma.sma_plugins -- sma/plugins.sh@124 -- # killprocess 1813082
00:15:04.006   10:11:58 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 1813082 ']'
00:15:04.006   10:11:58 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 1813082
00:15:04.006    10:11:58 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:04.006   10:11:58 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:04.006    10:11:58 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1813082
00:15:04.006   10:11:58 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:15:04.006   10:11:58 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:04.006   10:11:58 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1813082'
00:15:04.006  killing process with pid 1813082
00:15:04.006   10:11:58 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 1813082
00:15:04.006   10:11:58 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 1813082
00:15:04.006   10:11:58 sma.sma_plugins -- sma/plugins.sh@134 -- # smapid=1813311
00:15:04.006   10:11:58 sma.sma_plugins -- sma/plugins.sh@135 -- # sma_waitforlisten
00:15:04.006   10:11:58 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:04.006   10:11:58 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:15:04.006   10:11:58 sma.sma_plugins -- sma/plugins.sh@127 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:15:04.006    10:11:58 sma.sma_plugins -- sma/plugins.sh@127 -- # cat
00:15:04.006   10:11:58 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:15:04.006   10:11:58 sma.sma_plugins -- sma/plugins.sh@127 -- # SMA_PLUGINS=plugin1:plugin2
00:15:04.006   10:11:58 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:04.006   10:11:58 sma.sma_plugins -- sma/plugins.sh@127 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:04.006   10:11:58 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:04.006   10:11:58 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:15:04.264  I0000 00:00:1732093919.199482 1813311 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:05.198   10:11:59 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:05.198   10:11:59 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:05.198   10:11:59 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:05.198   10:12:00 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:15:05.198    10:12:00 sma.sma_plugins -- sma/plugins.sh@137 -- # create_device nvme
00:15:05.198    10:12:00 sma.sma_plugins -- sma/plugins.sh@137 -- # jq -r .handle
00:15:05.198    10:12:00 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:05.198  I0000 00:00:1732093920.261990 1813484 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:05.198  I0000 00:00:1732093920.263848 1813484 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:05.198   10:12:00 sma.sma_plugins -- sma/plugins.sh@137 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:15:05.198    10:12:00 sma.sma_plugins -- sma/plugins.sh@138 -- # create_device nvmf_tcp
00:15:05.198    10:12:00 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:05.198    10:12:00 sma.sma_plugins -- sma/plugins.sh@138 -- # jq -r .handle
00:15:05.457  I0000 00:00:1732093920.551941 1813530 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:05.457  I0000 00:00:1732093920.553774 1813530 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:05.716   10:12:00 sma.sma_plugins -- sma/plugins.sh@138 -- # [[ nvmf_tcp:plugin2-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:15:05.716   10:12:00 sma.sma_plugins -- sma/plugins.sh@140 -- # killprocess 1813311
00:15:05.716   10:12:00 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 1813311 ']'
00:15:05.716   10:12:00 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 1813311
00:15:05.716    10:12:00 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:05.716   10:12:00 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:05.716    10:12:00 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1813311
00:15:05.716   10:12:00 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:15:05.716   10:12:00 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:05.716   10:12:00 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1813311'
00:15:05.716  killing process with pid 1813311
00:15:05.716   10:12:00 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 1813311
00:15:05.716   10:12:00 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 1813311
00:15:05.716   10:12:00 sma.sma_plugins -- sma/plugins.sh@152 -- # smapid=1813695
00:15:05.716   10:12:00 sma.sma_plugins -- sma/plugins.sh@153 -- # sma_waitforlisten
00:15:05.716   10:12:00 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:05.716   10:12:00 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:15:05.716   10:12:00 sma.sma_plugins -- sma/plugins.sh@143 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:15:05.716   10:12:00 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:15:05.716   10:12:00 sma.sma_plugins -- sma/plugins.sh@143 -- # SMA_PLUGINS=plugin1
00:15:05.716    10:12:00 sma.sma_plugins -- sma/plugins.sh@143 -- # cat
00:15:05.716   10:12:00 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:05.716   10:12:00 sma.sma_plugins -- sma/plugins.sh@143 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:05.716   10:12:00 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:05.716   10:12:00 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:15:05.974  I0000 00:00:1732093920.927557 1813695 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:06.906   10:12:01 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:06.906   10:12:01 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:06.906   10:12:01 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:06.906   10:12:01 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:15:06.906    10:12:01 sma.sma_plugins -- sma/plugins.sh@155 -- # create_device nvme
00:15:06.906    10:12:01 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:06.906    10:12:01 sma.sma_plugins -- sma/plugins.sh@155 -- # jq -r .handle
00:15:06.906  I0000 00:00:1732093921.951596 1813913 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:06.906  I0000 00:00:1732093921.953465 1813913 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:06.906   10:12:01 sma.sma_plugins -- sma/plugins.sh@155 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:15:06.906    10:12:01 sma.sma_plugins -- sma/plugins.sh@156 -- # create_device nvmf_tcp
00:15:06.906    10:12:01 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:06.906    10:12:01 sma.sma_plugins -- sma/plugins.sh@156 -- # jq -r .handle
00:15:07.164  I0000 00:00:1732093922.215940 1813958 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:07.164  I0000 00:00:1732093922.218000 1813958 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:07.164   10:12:02 sma.sma_plugins -- sma/plugins.sh@156 -- # [[ nvmf_tcp:plugin2-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:15:07.164   10:12:02 sma.sma_plugins -- sma/plugins.sh@158 -- # killprocess 1813695
00:15:07.164   10:12:02 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 1813695 ']'
00:15:07.164   10:12:02 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 1813695
00:15:07.164    10:12:02 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:07.164   10:12:02 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:07.164    10:12:02 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1813695
00:15:07.164   10:12:02 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:15:07.164   10:12:02 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:07.164   10:12:02 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1813695'
00:15:07.164  killing process with pid 1813695
00:15:07.164   10:12:02 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 1813695
00:15:07.164   10:12:02 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 1813695
00:15:07.422   10:12:02 sma.sma_plugins -- sma/plugins.sh@161 -- # crypto_engines=(crypto-plugin1 crypto-plugin2)
00:15:07.422   10:12:02 sma.sma_plugins -- sma/plugins.sh@162 -- # for crypto in "${crypto_engines[@]}"
00:15:07.422   10:12:02 sma.sma_plugins -- sma/plugins.sh@175 -- # smapid=1813987
00:15:07.422   10:12:02 sma.sma_plugins -- sma/plugins.sh@176 -- # sma_waitforlisten
00:15:07.422   10:12:02 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:07.422   10:12:02 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:15:07.422   10:12:02 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:15:07.422   10:12:02 sma.sma_plugins -- sma/plugins.sh@163 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:15:07.422   10:12:02 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:07.422   10:12:02 sma.sma_plugins -- sma/plugins.sh@163 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:07.422   10:12:02 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:07.422    10:12:02 sma.sma_plugins -- sma/plugins.sh@163 -- # cat
00:15:07.422   10:12:02 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:15:07.680  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:07.680  I0000 00:00:1732093922.576684 1813987 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:08.245   10:12:03 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:08.245   10:12:03 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:08.245   10:12:03 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:08.502   10:12:03 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:15:08.502    10:12:03 sma.sma_plugins -- sma/plugins.sh@178 -- # create_device nvme
00:15:08.502    10:12:03 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:08.502    10:12:03 sma.sma_plugins -- sma/plugins.sh@178 -- # jq -r .handle
00:15:08.502  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:08.503  I0000 00:00:1732093923.616924 1814152 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:08.503  I0000 00:00:1732093923.618926 1814152 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:08.760   10:12:03 sma.sma_plugins -- sma/plugins.sh@178 -- # [[ nvme:plugin1-device1:crypto-plugin1 == nvme:plugin1-device1:crypto-plugin1 ]]
00:15:08.760    10:12:03 sma.sma_plugins -- sma/plugins.sh@179 -- # create_device nvmf_tcp
00:15:08.760    10:12:03 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:08.760    10:12:03 sma.sma_plugins -- sma/plugins.sh@179 -- # jq -r .handle
00:15:09.019  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:09.019  I0000 00:00:1732093923.888074 1814183 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:09.019  I0000 00:00:1732093923.889932 1814183 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:09.019   10:12:03 sma.sma_plugins -- sma/plugins.sh@179 -- # [[ nvmf_tcp:plugin2-device2:crypto-plugin1 == nvmf_tcp:plugin2-device2:crypto-plugin1 ]]
00:15:09.019   10:12:03 sma.sma_plugins -- sma/plugins.sh@181 -- # killprocess 1813987
00:15:09.019   10:12:03 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 1813987 ']'
00:15:09.019   10:12:03 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 1813987
00:15:09.019    10:12:03 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:09.019   10:12:03 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:09.019    10:12:03 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1813987
00:15:09.019   10:12:03 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:15:09.019   10:12:03 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:09.019   10:12:03 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1813987'
00:15:09.019  killing process with pid 1813987
00:15:09.019   10:12:03 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 1813987
00:15:09.019   10:12:03 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 1813987
00:15:09.019   10:12:03 sma.sma_plugins -- sma/plugins.sh@162 -- # for crypto in "${crypto_engines[@]}"
00:15:09.019   10:12:03 sma.sma_plugins -- sma/plugins.sh@175 -- # smapid=1814331
00:15:09.019   10:12:03 sma.sma_plugins -- sma/plugins.sh@176 -- # sma_waitforlisten
00:15:09.019   10:12:03 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:09.019   10:12:03 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:15:09.019   10:12:03 sma.sma_plugins -- sma/plugins.sh@163 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:15:09.019   10:12:03 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:15:09.019   10:12:03 sma.sma_plugins -- sma/plugins.sh@163 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:09.019   10:12:03 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:09.019    10:12:03 sma.sma_plugins -- sma/plugins.sh@163 -- # cat
00:15:09.019   10:12:03 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:09.019   10:12:04 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:15:09.276  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:09.276  I0000 00:00:1732093924.237170 1814331 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:10.210   10:12:05 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:10.210   10:12:05 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:10.210   10:12:05 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:10.210   10:12:05 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:15:10.210    10:12:05 sma.sma_plugins -- sma/plugins.sh@178 -- # create_device nvme
00:15:10.210    10:12:05 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:10.210    10:12:05 sma.sma_plugins -- sma/plugins.sh@178 -- # jq -r .handle
00:15:10.210  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:10.210  I0000 00:00:1732093925.282575 1814439 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:10.210  I0000 00:00:1732093925.284707 1814439 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:10.210   10:12:05 sma.sma_plugins -- sma/plugins.sh@178 -- # [[ nvme:plugin1-device1:crypto-plugin2 == nvme:plugin1-device1:crypto-plugin2 ]]
00:15:10.210    10:12:05 sma.sma_plugins -- sma/plugins.sh@179 -- # create_device nvmf_tcp
00:15:10.210    10:12:05 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:10.210    10:12:05 sma.sma_plugins -- sma/plugins.sh@179 -- # jq -r .handle
00:15:10.468  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:10.468  I0000 00:00:1732093925.553264 1814519 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:10.468  I0000 00:00:1732093925.555112 1814519 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:10.468   10:12:05 sma.sma_plugins -- sma/plugins.sh@179 -- # [[ nvmf_tcp:plugin2-device2:crypto-plugin2 == nvmf_tcp:plugin2-device2:crypto-plugin2 ]]
00:15:10.468   10:12:05 sma.sma_plugins -- sma/plugins.sh@181 -- # killprocess 1814331
00:15:10.468   10:12:05 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 1814331 ']'
00:15:10.468   10:12:05 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 1814331
00:15:10.468    10:12:05 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:10.468   10:12:05 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:10.468    10:12:05 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1814331
00:15:10.727   10:12:05 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:15:10.727   10:12:05 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:10.727   10:12:05 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1814331'
00:15:10.727  killing process with pid 1814331
00:15:10.727   10:12:05 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 1814331
00:15:10.727   10:12:05 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 1814331
00:15:10.727   10:12:05 sma.sma_plugins -- sma/plugins.sh@184 -- # cleanup
00:15:10.727   10:12:05 sma.sma_plugins -- sma/plugins.sh@13 -- # killprocess 1811824
00:15:10.727   10:12:05 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 1811824 ']'
00:15:10.727   10:12:05 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 1811824
00:15:10.727    10:12:05 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:10.727   10:12:05 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:10.727    10:12:05 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1811824
00:15:10.727   10:12:05 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:10.727   10:12:05 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:10.727   10:12:05 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1811824'
00:15:10.727  killing process with pid 1811824
00:15:10.727   10:12:05 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 1811824
00:15:10.727   10:12:05 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 1811824
00:15:12.626   10:12:07 sma.sma_plugins -- sma/plugins.sh@14 -- # killprocess 1814331
00:15:12.626   10:12:07 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 1814331 ']'
00:15:12.626   10:12:07 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 1814331
00:15:12.626  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1814331) - No such process
00:15:12.626   10:12:07 sma.sma_plugins -- common/autotest_common.sh@981 -- # echo 'Process with pid 1814331 is not found'
00:15:12.626  Process with pid 1814331 is not found
00:15:12.626   10:12:07 sma.sma_plugins -- sma/plugins.sh@185 -- # trap - SIGINT SIGTERM EXIT
00:15:12.626  
00:15:12.626  real	0m18.239s
00:15:12.626  user	0m25.060s
00:15:12.626  sys	0m2.090s
00:15:12.626   10:12:07 sma.sma_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:12.626   10:12:07 sma.sma_plugins -- common/autotest_common.sh@10 -- # set +x
00:15:12.626  ************************************
00:15:12.626  END TEST sma_plugins
00:15:12.626  ************************************
00:15:12.626   10:12:07 sma -- sma/sma.sh@14 -- # run_test sma_discovery /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/discovery.sh
00:15:12.626   10:12:07 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:12.626   10:12:07 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:12.626   10:12:07 sma -- common/autotest_common.sh@10 -- # set +x
00:15:12.884  ************************************
00:15:12.884  START TEST sma_discovery
00:15:12.884  ************************************
00:15:12.884   10:12:07 sma.sma_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/discovery.sh
00:15:12.884  * Looking for test storage...
00:15:12.884  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:15:12.884    10:12:07 sma.sma_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:15:12.884     10:12:07 sma.sma_discovery -- common/autotest_common.sh@1693 -- # lcov --version
00:15:12.884     10:12:07 sma.sma_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:15:12.884    10:12:07 sma.sma_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:15:12.884    10:12:07 sma.sma_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:12.884    10:12:07 sma.sma_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:12.884    10:12:07 sma.sma_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:12.884    10:12:07 sma.sma_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:15:12.884    10:12:07 sma.sma_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:15:12.884    10:12:07 sma.sma_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:15:12.884    10:12:07 sma.sma_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:15:12.884    10:12:07 sma.sma_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:15:12.884    10:12:07 sma.sma_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:15:12.884    10:12:07 sma.sma_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:15:12.884    10:12:07 sma.sma_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:12.884    10:12:07 sma.sma_discovery -- scripts/common.sh@344 -- # case "$op" in
00:15:12.884    10:12:07 sma.sma_discovery -- scripts/common.sh@345 -- # : 1
00:15:12.884    10:12:07 sma.sma_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:12.884    10:12:07 sma.sma_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:12.884     10:12:07 sma.sma_discovery -- scripts/common.sh@365 -- # decimal 1
00:15:12.884     10:12:07 sma.sma_discovery -- scripts/common.sh@353 -- # local d=1
00:15:12.884     10:12:07 sma.sma_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:12.884     10:12:07 sma.sma_discovery -- scripts/common.sh@355 -- # echo 1
00:15:12.884    10:12:07 sma.sma_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:15:12.884     10:12:07 sma.sma_discovery -- scripts/common.sh@366 -- # decimal 2
00:15:12.884     10:12:07 sma.sma_discovery -- scripts/common.sh@353 -- # local d=2
00:15:12.884     10:12:07 sma.sma_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:12.884     10:12:07 sma.sma_discovery -- scripts/common.sh@355 -- # echo 2
00:15:12.884    10:12:07 sma.sma_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:15:12.884    10:12:07 sma.sma_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:12.884    10:12:07 sma.sma_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:12.884    10:12:07 sma.sma_discovery -- scripts/common.sh@368 -- # return 0
00:15:12.884    10:12:07 sma.sma_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:12.884    10:12:07 sma.sma_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:15:12.884  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:12.884  		--rc genhtml_branch_coverage=1
00:15:12.884  		--rc genhtml_function_coverage=1
00:15:12.884  		--rc genhtml_legend=1
00:15:12.884  		--rc geninfo_all_blocks=1
00:15:12.884  		--rc geninfo_unexecuted_blocks=1
00:15:12.884  		
00:15:12.884  		'
00:15:12.884    10:12:07 sma.sma_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:15:12.884  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:12.884  		--rc genhtml_branch_coverage=1
00:15:12.884  		--rc genhtml_function_coverage=1
00:15:12.884  		--rc genhtml_legend=1
00:15:12.884  		--rc geninfo_all_blocks=1
00:15:12.885  		--rc geninfo_unexecuted_blocks=1
00:15:12.885  		
00:15:12.885  		'
00:15:12.885    10:12:07 sma.sma_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:15:12.885  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:12.885  		--rc genhtml_branch_coverage=1
00:15:12.885  		--rc genhtml_function_coverage=1
00:15:12.885  		--rc genhtml_legend=1
00:15:12.885  		--rc geninfo_all_blocks=1
00:15:12.885  		--rc geninfo_unexecuted_blocks=1
00:15:12.885  		
00:15:12.885  		'
00:15:12.885    10:12:07 sma.sma_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:15:12.885  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:12.885  		--rc genhtml_branch_coverage=1
00:15:12.885  		--rc genhtml_function_coverage=1
00:15:12.885  		--rc genhtml_legend=1
00:15:12.885  		--rc geninfo_all_blocks=1
00:15:12.885  		--rc geninfo_unexecuted_blocks=1
00:15:12.885  		
00:15:12.885  		'
00:15:12.885   10:12:07 sma.sma_discovery -- sma/discovery.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:15:12.885   10:12:07 sma.sma_discovery -- sma/discovery.sh@12 -- # sma_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:12.885   10:12:07 sma.sma_discovery -- sma/discovery.sh@13 -- # rpc_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:15:12.885   10:12:07 sma.sma_discovery -- sma/discovery.sh@15 -- # t1sock=/var/tmp/spdk.sock1
00:15:12.885   10:12:07 sma.sma_discovery -- sma/discovery.sh@16 -- # t2sock=/var/tmp/spdk.sock2
00:15:12.885   10:12:07 sma.sma_discovery -- sma/discovery.sh@17 -- # invalid_port=8008
00:15:12.885   10:12:07 sma.sma_discovery -- sma/discovery.sh@18 -- # t1dscport=8009
00:15:12.885   10:12:07 sma.sma_discovery -- sma/discovery.sh@19 -- # t2dscport1=8010
00:15:12.885   10:12:07 sma.sma_discovery -- sma/discovery.sh@20 -- # t2dscport2=8011
00:15:12.885   10:12:07 sma.sma_discovery -- sma/discovery.sh@21 -- # t1nqn=nqn.2016-06.io.spdk:node1
00:15:12.885   10:12:07 sma.sma_discovery -- sma/discovery.sh@22 -- # t2nqn=nqn.2016-06.io.spdk:node2
00:15:12.885   10:12:07 sma.sma_discovery -- sma/discovery.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host0
00:15:12.885   10:12:07 sma.sma_discovery -- sma/discovery.sh@24 -- # cleanup_period=1
00:15:12.885   10:12:07 sma.sma_discovery -- sma/discovery.sh@132 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:15:12.885   10:12:07 sma.sma_discovery -- sma/discovery.sh@136 -- # t1pid=1815128
00:15:12.885   10:12:07 sma.sma_discovery -- sma/discovery.sh@135 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/spdk.sock1 -m 0x1
00:15:12.885   10:12:07 sma.sma_discovery -- sma/discovery.sh@138 -- # t2pid=1815129
00:15:12.885   10:12:07 sma.sma_discovery -- sma/discovery.sh@137 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/spdk.sock2 -m 0x2
00:15:12.885   10:12:07 sma.sma_discovery -- sma/discovery.sh@142 -- # tgtpid=1815130
00:15:12.885   10:12:07 sma.sma_discovery -- sma/discovery.sh@141 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x4
00:15:12.885   10:12:07 sma.sma_discovery -- sma/discovery.sh@153 -- # smapid=1815131
00:15:12.885   10:12:07 sma.sma_discovery -- sma/discovery.sh@155 -- # waitforlisten 1815130
00:15:12.885   10:12:07 sma.sma_discovery -- common/autotest_common.sh@835 -- # '[' -z 1815130 ']'
00:15:12.885   10:12:07 sma.sma_discovery -- sma/discovery.sh@145 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:12.885   10:12:07 sma.sma_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:12.885    10:12:07 sma.sma_discovery -- sma/discovery.sh@145 -- # cat
00:15:12.885   10:12:07 sma.sma_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:12.885   10:12:07 sma.sma_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:12.885  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:12.885   10:12:07 sma.sma_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:12.885   10:12:07 sma.sma_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:13.143  [2024-11-20 10:12:08.025323] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:15:13.143  [2024-11-20 10:12:08.025323] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:15:13.143  [2024-11-20 10:12:08.025375] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:15:13.143  [2024-11-20 10:12:08.025470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815129 ]
00:15:13.143  [2024-11-20 10:12:08.025472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815130 ]
00:15:13.143  [2024-11-20 10:12:08.025562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815128 ]
00:15:13.143  EAL: No free 2048 kB hugepages reported on node 1
00:15:13.143  EAL: No free 2048 kB hugepages reported on node 1
00:15:13.143  EAL: No free 2048 kB hugepages reported on node 1
00:15:13.143  [2024-11-20 10:12:08.189965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:13.143  [2024-11-20 10:12:08.189957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:13.143  [2024-11-20 10:12:08.189982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:13.401  [2024-11-20 10:12:08.317690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:13.401  [2024-11-20 10:12:08.331430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:13.401  [2024-11-20 10:12:08.344615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:15:14.333  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:14.333  I0000 00:00:1732093929.327482 1815131 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:14.333   10:12:09 sma.sma_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:14.333   10:12:09 sma.sma_discovery -- common/autotest_common.sh@868 -- # return 0
00:15:14.333   10:12:09 sma.sma_discovery -- sma/discovery.sh@156 -- # waitforlisten 1815128 /var/tmp/spdk.sock1
00:15:14.333   10:12:09 sma.sma_discovery -- common/autotest_common.sh@835 -- # '[' -z 1815128 ']'
00:15:14.333   10:12:09 sma.sma_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock1
00:15:14.333   10:12:09 sma.sma_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:14.333   10:12:09 sma.sma_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock1...'
00:15:14.333  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock1...
00:15:14.333   10:12:09 sma.sma_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:14.333   10:12:09 sma.sma_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:14.333  [2024-11-20 10:12:09.340892] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:14.589   10:12:09 sma.sma_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:14.589   10:12:09 sma.sma_discovery -- common/autotest_common.sh@868 -- # return 0
00:15:14.589   10:12:09 sma.sma_discovery -- sma/discovery.sh@157 -- # waitforlisten 1815129 /var/tmp/spdk.sock2
00:15:14.589   10:12:09 sma.sma_discovery -- common/autotest_common.sh@835 -- # '[' -z 1815129 ']'
00:15:14.589   10:12:09 sma.sma_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock2
00:15:14.589   10:12:09 sma.sma_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:14.589   10:12:09 sma.sma_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock2...'
00:15:14.589  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock2...
00:15:14.589   10:12:09 sma.sma_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:14.589   10:12:09 sma.sma_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:14.847   10:12:09 sma.sma_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:14.847   10:12:09 sma.sma_discovery -- common/autotest_common.sh@868 -- # return 0
00:15:14.847    10:12:09 sma.sma_discovery -- sma/discovery.sh@162 -- # uuidgen
00:15:14.847   10:12:09 sma.sma_discovery -- sma/discovery.sh@162 -- # t1uuid=5794b457-f196-4b46-99f5-512c57779f1c
00:15:14.847    10:12:09 sma.sma_discovery -- sma/discovery.sh@163 -- # uuidgen
00:15:14.847   10:12:09 sma.sma_discovery -- sma/discovery.sh@163 -- # t2uuid=71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e
00:15:14.847    10:12:09 sma.sma_discovery -- sma/discovery.sh@164 -- # uuidgen
00:15:14.847   10:12:09 sma.sma_discovery -- sma/discovery.sh@164 -- # t2uuid2=257c4508-e761-414f-80e2-0f0aab2c2e67
00:15:14.847   10:12:09 sma.sma_discovery -- sma/discovery.sh@166 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock1
00:15:15.107  [2024-11-20 10:12:10.147649] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:15.107  [2024-11-20 10:12:10.188184] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:15:15.107  [2024-11-20 10:12:10.195948] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 8009 ***
00:15:15.107  null0
00:15:15.107   10:12:10 sma.sma_discovery -- sma/discovery.sh@176 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2
00:15:15.365  [2024-11-20 10:12:10.461131] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:15.623  [2024-11-20 10:12:10.517654] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 ***
00:15:15.623  [2024-11-20 10:12:10.525489] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 8010 ***
00:15:15.623  [2024-11-20 10:12:10.533557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 8011 ***
00:15:15.623  null0
00:15:15.623  null1
00:15:15.623   10:12:10 sma.sma_discovery -- sma/discovery.sh@190 -- # sma_waitforlisten
00:15:15.623   10:12:10 sma.sma_discovery -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:15.623   10:12:10 sma.sma_discovery -- sma/common.sh@8 -- # local sma_port=8080
00:15:15.623   10:12:10 sma.sma_discovery -- sma/common.sh@10 -- # (( i = 0 ))
00:15:15.623   10:12:10 sma.sma_discovery -- sma/common.sh@10 -- # (( i < 5 ))
00:15:15.623   10:12:10 sma.sma_discovery -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:15.623   10:12:10 sma.sma_discovery -- sma/common.sh@12 -- # return 0
00:15:15.623   10:12:10 sma.sma_discovery -- sma/discovery.sh@192 -- # localnqn=nqn.2016-06.io.spdk:local0
00:15:15.623    10:12:10 sma.sma_discovery -- sma/discovery.sh@195 -- # create_device nqn.2016-06.io.spdk:local0
00:15:15.623    10:12:10 sma.sma_discovery -- sma/discovery.sh@195 -- # jq -r .handle
00:15:15.623    10:12:10 sma.sma_discovery -- sma/discovery.sh@69 -- # local nqn=nqn.2016-06.io.spdk:local0
00:15:15.623    10:12:10 sma.sma_discovery -- sma/discovery.sh@70 -- # local volume_id=
00:15:15.623    10:12:10 sma.sma_discovery -- sma/discovery.sh@71 -- # local volume=
00:15:15.623    10:12:10 sma.sma_discovery -- sma/discovery.sh@73 -- # shift
00:15:15.623    10:12:10 sma.sma_discovery -- sma/discovery.sh@74 -- # [[ -n '' ]]
00:15:15.623    10:12:10 sma.sma_discovery -- sma/discovery.sh@78 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:15.880  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:15.880  I0000 00:00:1732093930.838282 1815699 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:15.880  I0000 00:00:1732093930.840139 1815699 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:15.881  [2024-11-20 10:12:10.861874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 ***
00:15:15.881   10:12:10 sma.sma_discovery -- sma/discovery.sh@195 -- # device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:15:15.881   10:12:10 sma.sma_discovery -- sma/discovery.sh@198 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:16.139  [
00:15:16.139    {
00:15:16.139      "nqn": "nqn.2016-06.io.spdk:local0",
00:15:16.139      "subtype": "NVMe",
00:15:16.139      "listen_addresses": [
00:15:16.139        {
00:15:16.139          "trtype": "TCP",
00:15:16.139          "adrfam": "IPv4",
00:15:16.139          "traddr": "127.0.0.1",
00:15:16.139          "trsvcid": "4419"
00:15:16.139        }
00:15:16.139      ],
00:15:16.139      "allow_any_host": false,
00:15:16.139      "hosts": [],
00:15:16.139      "serial_number": "00000000000000000000",
00:15:16.139      "model_number": "SPDK bdev Controller",
00:15:16.139      "max_namespaces": 32,
00:15:16.139      "min_cntlid": 1,
00:15:16.139      "max_cntlid": 65519,
00:15:16.139      "namespaces": []
00:15:16.139    }
00:15:16.139  ]
00:15:16.139   10:12:11 sma.sma_discovery -- sma/discovery.sh@201 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 5794b457-f196-4b46-99f5-512c57779f1c 8009 8010
00:15:16.139   10:12:11 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:15:16.139   10:12:11 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:15:16.139   10:12:11 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:16.139    10:12:11 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 5794b457-f196-4b46-99f5-512c57779f1c 8009 8010
00:15:16.139    10:12:11 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=5794b457-f196-4b46-99f5-512c57779f1c
00:15:16.139    10:12:11 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:15:16.139    10:12:11 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:15:16.139     10:12:11 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 5794b457-f196-4b46-99f5-512c57779f1c
00:15:16.139     10:12:11 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:16.139     10:12:11 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009 8010
00:15:16.139     10:12:11 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009' '8010')
00:15:16.139     10:12:11 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:15:16.139     10:12:11 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:15:16.139     10:12:11 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:15:16.139     10:12:11 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:15:16.139     10:12:11 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:15:16.139     10:12:11 sma.sma_discovery -- sma/discovery.sh@44 -- # echo ,
00:15:16.139     10:12:11 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:15:16.139     10:12:11 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:15:16.139     10:12:11 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:15:16.139     10:12:11 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:15:16.139     10:12:11 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:15:16.139     10:12:11 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:15:16.397  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:16.397  I0000 00:00:1732093931.468230 1815846 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:16.397  I0000 00:00:1732093931.469998 1815846 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:18.987  {}
00:15:18.987    10:12:13 sma.sma_discovery -- sma/discovery.sh@204 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:18.987    10:12:13 sma.sma_discovery -- sma/discovery.sh@204 -- # jq -r '. | length'
00:15:18.987   10:12:14 sma.sma_discovery -- sma/discovery.sh@204 -- # [[ 2 -eq 2 ]]
00:15:18.987   10:12:14 sma.sma_discovery -- sma/discovery.sh@206 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:18.987   10:12:14 sma.sma_discovery -- sma/discovery.sh@206 -- # jq -r '.[].trid.trsvcid'
00:15:18.987   10:12:14 sma.sma_discovery -- sma/discovery.sh@206 -- # grep 8009
00:15:19.245  8009
00:15:19.245   10:12:14 sma.sma_discovery -- sma/discovery.sh@207 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:19.245   10:12:14 sma.sma_discovery -- sma/discovery.sh@207 -- # jq -r '.[].trid.trsvcid'
00:15:19.245   10:12:14 sma.sma_discovery -- sma/discovery.sh@207 -- # grep 8010
00:15:19.503  8010
00:15:19.503    10:12:14 sma.sma_discovery -- sma/discovery.sh@210 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:19.503    10:12:14 sma.sma_discovery -- sma/discovery.sh@210 -- # jq -r '.[].namespaces | length'
00:15:19.762   10:12:14 sma.sma_discovery -- sma/discovery.sh@210 -- # [[ 1 -eq 1 ]]
00:15:19.762    10:12:14 sma.sma_discovery -- sma/discovery.sh@211 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:19.762    10:12:14 sma.sma_discovery -- sma/discovery.sh@211 -- # jq -r '.[].namespaces[0].uuid'
00:15:20.020   10:12:15 sma.sma_discovery -- sma/discovery.sh@211 -- # [[ 5794b457-f196-4b46-99f5-512c57779f1c == \5\7\9\4\b\4\5\7\-\f\1\9\6\-\4\b\4\6\-\9\9\f\5\-\5\1\2\c\5\7\7\7\9\f\1\c ]]
00:15:20.020   10:12:15 sma.sma_discovery -- sma/discovery.sh@214 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e 8010
00:15:20.020   10:12:15 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:15:20.020   10:12:15 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:15:20.020   10:12:15 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:20.020    10:12:15 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e 8010
00:15:20.020    10:12:15 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e
00:15:20.020    10:12:15 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:15:20.020    10:12:15 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:15:20.020     10:12:15 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e
00:15:20.020     10:12:15 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:20.278     10:12:15 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8010
00:15:20.278     10:12:15 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8010')
00:15:20.278     10:12:15 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:15:20.278     10:12:15 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:15:20.278     10:12:15 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:15:20.278     10:12:15 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:15:20.278     10:12:15 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:15:20.278     10:12:15 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:15:20.278     10:12:15 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:15:20.278  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:20.278  I0000 00:00:1732093935.392305 1816311 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:20.278  I0000 00:00:1732093935.394059 1816311 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:20.537  {}
00:15:20.537    10:12:15 sma.sma_discovery -- sma/discovery.sh@217 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:20.537    10:12:15 sma.sma_discovery -- sma/discovery.sh@217 -- # jq -r '. | length'
00:15:20.795   10:12:15 sma.sma_discovery -- sma/discovery.sh@217 -- # [[ 2 -eq 2 ]]
00:15:20.795    10:12:15 sma.sma_discovery -- sma/discovery.sh@218 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:20.795    10:12:15 sma.sma_discovery -- sma/discovery.sh@218 -- # jq -r '.[].namespaces | length'
00:15:21.053   10:12:15 sma.sma_discovery -- sma/discovery.sh@218 -- # [[ 2 -eq 2 ]]
00:15:21.053   10:12:15 sma.sma_discovery -- sma/discovery.sh@219 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:21.053   10:12:15 sma.sma_discovery -- sma/discovery.sh@219 -- # jq -r '.[].namespaces[].uuid'
00:15:21.053   10:12:15 sma.sma_discovery -- sma/discovery.sh@219 -- # grep 5794b457-f196-4b46-99f5-512c57779f1c
00:15:21.311  5794b457-f196-4b46-99f5-512c57779f1c
00:15:21.311   10:12:16 sma.sma_discovery -- sma/discovery.sh@220 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:21.311   10:12:16 sma.sma_discovery -- sma/discovery.sh@220 -- # jq -r '.[].namespaces[].uuid'
00:15:21.311   10:12:16 sma.sma_discovery -- sma/discovery.sh@220 -- # grep 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e
00:15:21.569  71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e
00:15:21.569   10:12:16 sma.sma_discovery -- sma/discovery.sh@223 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 5794b457-f196-4b46-99f5-512c57779f1c
00:15:21.569   10:12:16 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:21.569    10:12:16 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 5794b457-f196-4b46-99f5-512c57779f1c
00:15:21.569    10:12:16 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:21.827  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:21.827  I0000 00:00:1732093936.824394 1816604 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:21.827  I0000 00:00:1732093936.826206 1816604 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:21.827  {}
00:15:21.827    10:12:16 sma.sma_discovery -- sma/discovery.sh@227 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:21.827    10:12:16 sma.sma_discovery -- sma/discovery.sh@227 -- # jq -r '. | length'
00:15:22.085   10:12:17 sma.sma_discovery -- sma/discovery.sh@227 -- # [[ 1 -eq 1 ]]
00:15:22.085   10:12:17 sma.sma_discovery -- sma/discovery.sh@228 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:22.085   10:12:17 sma.sma_discovery -- sma/discovery.sh@228 -- # jq -r '.[].trid.trsvcid'
00:15:22.085   10:12:17 sma.sma_discovery -- sma/discovery.sh@228 -- # grep 8010
00:15:22.343  8010
00:15:22.344    10:12:17 sma.sma_discovery -- sma/discovery.sh@230 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:22.344    10:12:17 sma.sma_discovery -- sma/discovery.sh@230 -- # jq -r '.[].namespaces | length'
00:15:22.602   10:12:17 sma.sma_discovery -- sma/discovery.sh@230 -- # [[ 1 -eq 1 ]]
00:15:22.602    10:12:17 sma.sma_discovery -- sma/discovery.sh@231 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:22.602    10:12:17 sma.sma_discovery -- sma/discovery.sh@231 -- # jq -r '.[].namespaces[0].uuid'
00:15:23.172   10:12:17 sma.sma_discovery -- sma/discovery.sh@231 -- # [[ 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e == \7\1\f\1\e\5\c\e\-\2\6\6\0\-\4\c\8\e\-\9\2\e\d\-\0\7\3\7\e\f\8\f\d\d\0\e ]]
00:15:23.172   10:12:17 sma.sma_discovery -- sma/discovery.sh@234 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e
00:15:23.172   10:12:17 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:23.172    10:12:17 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e
00:15:23.172    10:12:17 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:23.172  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:23.172  I0000 00:00:1732093938.256570 1816779 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:23.172  I0000 00:00:1732093938.258391 1816779 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:23.431  {}
00:15:23.431    10:12:18 sma.sma_discovery -- sma/discovery.sh@237 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:23.431    10:12:18 sma.sma_discovery -- sma/discovery.sh@237 -- # jq -r '. | length'
00:15:23.689   10:12:18 sma.sma_discovery -- sma/discovery.sh@237 -- # [[ 0 -eq 0 ]]
00:15:23.689    10:12:18 sma.sma_discovery -- sma/discovery.sh@238 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:23.689    10:12:18 sma.sma_discovery -- sma/discovery.sh@238 -- # jq -r '.[].namespaces | length'
00:15:23.947   10:12:18 sma.sma_discovery -- sma/discovery.sh@238 -- # [[ 0 -eq 0 ]]
00:15:23.947    10:12:18 sma.sma_discovery -- sma/discovery.sh@241 -- # uuidgen
00:15:23.947   10:12:18 sma.sma_discovery -- sma/discovery.sh@241 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 1c82d773-98e7-4649-a260-0b46870fa5da 8009
00:15:23.947   10:12:18 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:15:23.947   10:12:18 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 1c82d773-98e7-4649-a260-0b46870fa5da 8009
00:15:23.947   10:12:18 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:15:23.947   10:12:18 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:23.947    10:12:18 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t attach_volume
00:15:23.947   10:12:18 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:23.947   10:12:18 sma.sma_discovery -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 1c82d773-98e7-4649-a260-0b46870fa5da 8009
00:15:23.947   10:12:18 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:15:23.947   10:12:18 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:15:23.947   10:12:18 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:23.947    10:12:18 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 1c82d773-98e7-4649-a260-0b46870fa5da 8009
00:15:23.947    10:12:18 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=1c82d773-98e7-4649-a260-0b46870fa5da
00:15:23.947    10:12:18 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:15:23.947    10:12:18 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:15:23.947     10:12:18 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 1c82d773-98e7-4649-a260-0b46870fa5da
00:15:23.947     10:12:18 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:23.947     10:12:18 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009
00:15:23.947     10:12:18 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009')
00:15:23.947     10:12:18 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:15:23.947     10:12:18 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:15:23.947     10:12:18 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:15:23.947     10:12:18 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:15:23.947     10:12:18 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:15:23.947     10:12:18 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:15:23.947     10:12:18 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:15:24.219  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:24.219  I0000 00:00:1732093939.181924 1816944 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:24.219  I0000 00:00:1732093939.183623 1816944 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:25.602  [2024-11-20 10:12:20.287025] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 1c82d773-98e7-4649-a260-0b46870fa5da
00:15:25.602  [2024-11-20 10:12:20.387271] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 1c82d773-98e7-4649-a260-0b46870fa5da
00:15:25.602  [2024-11-20 10:12:20.487519] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 1c82d773-98e7-4649-a260-0b46870fa5da
00:15:25.602  [2024-11-20 10:12:20.587768] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 1c82d773-98e7-4649-a260-0b46870fa5da
00:15:25.602  [2024-11-20 10:12:20.688014] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 1c82d773-98e7-4649-a260-0b46870fa5da
00:15:25.863  [2024-11-20 10:12:20.788262] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 1c82d773-98e7-4649-a260-0b46870fa5da
00:15:25.863  [2024-11-20 10:12:20.888517] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 1c82d773-98e7-4649-a260-0b46870fa5da
00:15:26.123  [2024-11-20 10:12:20.988766] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 1c82d773-98e7-4649-a260-0b46870fa5da
00:15:26.123  [2024-11-20 10:12:21.089012] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 1c82d773-98e7-4649-a260-0b46870fa5da
00:15:26.124  [2024-11-20 10:12:21.189261] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 1c82d773-98e7-4649-a260-0b46870fa5da
00:15:26.383  [2024-11-20 10:12:21.289514] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 1c82d773-98e7-4649-a260-0b46870fa5da
00:15:26.383  [2024-11-20 10:12:21.289565] bdev.c:8401:_bdev_open_async: *ERROR*: Timed out while waiting for bdev '1c82d773-98e7-4649-a260-0b46870fa5da' to appear
00:15:26.383  Traceback (most recent call last):
00:15:26.383    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:15:26.383      main(sys.argv[1:])
00:15:26.383    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:15:26.383      result = client.call(request['method'], request.get('params', {}))
00:15:26.383               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:26.383    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:15:26.383      response = func(request=json_format.ParseDict(params, input()))
00:15:26.383                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:26.383    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:15:26.383      return _end_unary_response_blocking(state, call, False, None)
00:15:26.383             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:26.383    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:15:26.383      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:15:26.383      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:26.383  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:15:26.383  	status = StatusCode.NOT_FOUND
00:15:26.383  	details = "Volume could not be found"
00:15:26.383  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {created_time:"2024-11-20T10:12:21.306787874+01:00", grpc_status:5, grpc_message:"Volume could not be found"}"
00:15:26.383  >
00:15:26.383   10:12:21 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:15:26.383   10:12:21 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:26.383   10:12:21 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:26.383   10:12:21 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:26.383    10:12:21 sma.sma_discovery -- sma/discovery.sh@242 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:26.383    10:12:21 sma.sma_discovery -- sma/discovery.sh@242 -- # jq -r '. | length'
00:15:26.642   10:12:21 sma.sma_discovery -- sma/discovery.sh@242 -- # [[ 0 -eq 0 ]]
00:15:26.642    10:12:21 sma.sma_discovery -- sma/discovery.sh@243 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:26.642    10:12:21 sma.sma_discovery -- sma/discovery.sh@243 -- # jq -r '.[].namespaces | length'
00:15:26.901   10:12:21 sma.sma_discovery -- sma/discovery.sh@243 -- # [[ 0 -eq 0 ]]
00:15:26.901   10:12:21 sma.sma_discovery -- sma/discovery.sh@246 -- # volumes=($t1uuid $t2uuid)
00:15:26.901   10:12:21 sma.sma_discovery -- sma/discovery.sh@247 -- # for volume_id in "${volumes[@]}"
00:15:26.901   10:12:21 sma.sma_discovery -- sma/discovery.sh@248 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 5794b457-f196-4b46-99f5-512c57779f1c 8009 8010
00:15:26.901   10:12:21 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:15:26.901   10:12:21 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:15:26.901   10:12:21 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:26.901    10:12:21 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 5794b457-f196-4b46-99f5-512c57779f1c 8009 8010
00:15:26.901    10:12:21 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=5794b457-f196-4b46-99f5-512c57779f1c
00:15:26.901    10:12:21 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:15:26.901    10:12:21 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:15:26.901     10:12:21 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 5794b457-f196-4b46-99f5-512c57779f1c
00:15:26.901     10:12:21 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:26.901     10:12:21 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009 8010
00:15:26.901     10:12:21 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009' '8010')
00:15:26.901     10:12:21 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:15:26.901     10:12:21 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:15:26.901     10:12:21 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:15:26.901     10:12:21 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:15:26.901     10:12:21 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:15:26.901     10:12:21 sma.sma_discovery -- sma/discovery.sh@44 -- # echo ,
00:15:26.901     10:12:21 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:15:26.901     10:12:21 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:15:26.901     10:12:21 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:15:26.901     10:12:21 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:15:26.901     10:12:21 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:15:26.901     10:12:21 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:15:27.162  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:27.162  I0000 00:00:1732093942.212762 1817273 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:27.162  I0000 00:00:1732093942.214641 1817273 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:29.696  {}
00:15:29.696   10:12:24 sma.sma_discovery -- sma/discovery.sh@247 -- # for volume_id in "${volumes[@]}"
00:15:29.696   10:12:24 sma.sma_discovery -- sma/discovery.sh@248 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e 8009 8010
00:15:29.696   10:12:24 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:15:29.696   10:12:24 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:15:29.696   10:12:24 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:29.696    10:12:24 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e 8009 8010
00:15:29.696    10:12:24 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e
00:15:29.696    10:12:24 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:15:29.696    10:12:24 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:15:29.696     10:12:24 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e
00:15:29.696     10:12:24 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:29.696     10:12:24 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009 8010
00:15:29.696     10:12:24 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009' '8010')
00:15:29.696     10:12:24 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:15:29.696     10:12:24 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:15:29.696     10:12:24 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:15:29.696     10:12:24 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:15:29.696     10:12:24 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:15:29.696     10:12:24 sma.sma_discovery -- sma/discovery.sh@44 -- # echo ,
00:15:29.696     10:12:24 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:15:29.696     10:12:24 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:15:29.696     10:12:24 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:15:29.696     10:12:24 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:15:29.696     10:12:24 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:15:29.696     10:12:24 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:15:29.696  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:29.696  I0000 00:00:1732093944.778556 1817688 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:29.696  I0000 00:00:1732093944.780463 1817688 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:29.955  {}
00:15:29.955    10:12:24 sma.sma_discovery -- sma/discovery.sh@251 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:29.955    10:12:24 sma.sma_discovery -- sma/discovery.sh@251 -- # jq -r '. | length'
00:15:30.213   10:12:25 sma.sma_discovery -- sma/discovery.sh@251 -- # [[ 2 -eq 2 ]]
00:15:30.213   10:12:25 sma.sma_discovery -- sma/discovery.sh@252 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:30.213   10:12:25 sma.sma_discovery -- sma/discovery.sh@252 -- # jq -r '.[].trid.trsvcid'
00:15:30.213   10:12:25 sma.sma_discovery -- sma/discovery.sh@252 -- # grep 8009
00:15:30.470  8009
00:15:30.470   10:12:25 sma.sma_discovery -- sma/discovery.sh@253 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:30.471   10:12:25 sma.sma_discovery -- sma/discovery.sh@253 -- # jq -r '.[].trid.trsvcid'
00:15:30.471   10:12:25 sma.sma_discovery -- sma/discovery.sh@253 -- # grep 8010
00:15:30.727  8010
00:15:30.727   10:12:25 sma.sma_discovery -- sma/discovery.sh@254 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:30.727   10:12:25 sma.sma_discovery -- sma/discovery.sh@254 -- # jq -r '.[].namespaces[].uuid'
00:15:30.727   10:12:25 sma.sma_discovery -- sma/discovery.sh@254 -- # grep 5794b457-f196-4b46-99f5-512c57779f1c
00:15:30.985  5794b457-f196-4b46-99f5-512c57779f1c
00:15:30.985   10:12:25 sma.sma_discovery -- sma/discovery.sh@255 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:30.985   10:12:25 sma.sma_discovery -- sma/discovery.sh@255 -- # jq -r '.[].namespaces[].uuid'
00:15:30.985   10:12:25 sma.sma_discovery -- sma/discovery.sh@255 -- # grep 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e
00:15:31.243  71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e
00:15:31.243   10:12:26 sma.sma_discovery -- sma/discovery.sh@258 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 5794b457-f196-4b46-99f5-512c57779f1c
00:15:31.243   10:12:26 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:31.243    10:12:26 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 5794b457-f196-4b46-99f5-512c57779f1c
00:15:31.243    10:12:26 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:31.502  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:31.502  I0000 00:00:1732093946.476522 1817875 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:31.502  I0000 00:00:1732093946.478363 1817875 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:31.502  {}
00:15:31.502    10:12:26 sma.sma_discovery -- sma/discovery.sh@260 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:31.502    10:12:26 sma.sma_discovery -- sma/discovery.sh@260 -- # jq -r '. | length'
00:15:31.761   10:12:26 sma.sma_discovery -- sma/discovery.sh@260 -- # [[ 2 -eq 2 ]]
00:15:31.761   10:12:26 sma.sma_discovery -- sma/discovery.sh@261 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:31.761   10:12:26 sma.sma_discovery -- sma/discovery.sh@261 -- # jq -r '.[].trid.trsvcid'
00:15:31.761   10:12:26 sma.sma_discovery -- sma/discovery.sh@261 -- # grep 8009
00:15:32.019  8009
00:15:32.019   10:12:27 sma.sma_discovery -- sma/discovery.sh@262 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:32.019   10:12:27 sma.sma_discovery -- sma/discovery.sh@262 -- # jq -r '.[].trid.trsvcid'
00:15:32.019   10:12:27 sma.sma_discovery -- sma/discovery.sh@262 -- # grep 8010
00:15:32.277  8010
00:15:32.277   10:12:27 sma.sma_discovery -- sma/discovery.sh@265 -- # NOT delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:15:32.277   10:12:27 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:15:32.277   10:12:27 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:15:32.277   10:12:27 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=delete_device
00:15:32.277   10:12:27 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:32.277    10:12:27 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t delete_device
00:15:32.277   10:12:27 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:32.277   10:12:27 sma.sma_discovery -- common/autotest_common.sh@655 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:15:32.277   10:12:27 sma.sma_discovery -- sma/discovery.sh@95 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:32.537  Traceback (most recent call last):
00:15:32.537    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:15:32.537      main(sys.argv[1:])
00:15:32.537    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:15:32.537      result = client.call(request['method'], request.get('params', {}))
00:15:32.537               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:32.537    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:15:32.537      response = func(request=json_format.ParseDict(params, input()))
00:15:32.537                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:32.537    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:15:32.537      return _end_unary_response_blocking(state, call, False, None)
00:15:32.537             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:32.537    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:15:32.537      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:15:32.537      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:32.537  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:15:32.537  	status = StatusCode.FAILED_PRECONDITION
00:15:32.537  	details = "Device has attached volumes"
00:15:32.537  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {created_time:"2024-11-20T10:12:27.621527109+01:00", grpc_status:9, grpc_message:"Device has attached volumes"}"
00:15:32.537  >
00:15:32.537   10:12:27 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:15:32.537   10:12:27 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:32.537   10:12:27 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:32.537   10:12:27 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:32.537    10:12:27 sma.sma_discovery -- sma/discovery.sh@267 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:32.537    10:12:27 sma.sma_discovery -- sma/discovery.sh@267 -- # jq -r '. | length'
00:15:33.109   10:12:27 sma.sma_discovery -- sma/discovery.sh@267 -- # [[ 2 -eq 2 ]]
00:15:33.109   10:12:27 sma.sma_discovery -- sma/discovery.sh@268 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:33.109   10:12:27 sma.sma_discovery -- sma/discovery.sh@268 -- # jq -r '.[].trid.trsvcid'
00:15:33.109   10:12:27 sma.sma_discovery -- sma/discovery.sh@268 -- # grep 8009
00:15:33.109  8009
00:15:33.109   10:12:28 sma.sma_discovery -- sma/discovery.sh@269 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:33.109   10:12:28 sma.sma_discovery -- sma/discovery.sh@269 -- # jq -r '.[].trid.trsvcid'
00:15:33.109   10:12:28 sma.sma_discovery -- sma/discovery.sh@269 -- # grep 8010
00:15:33.367  8010
00:15:33.367   10:12:28 sma.sma_discovery -- sma/discovery.sh@272 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e
00:15:33.367   10:12:28 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:33.367    10:12:28 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e
00:15:33.367    10:12:28 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:33.883  {}
00:15:33.883   10:12:28 sma.sma_discovery -- sma/discovery.sh@273 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:15:33.883   10:12:28 sma.sma_discovery -- sma/discovery.sh@95 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:34.141  {}
00:15:34.141    10:12:29 sma.sma_discovery -- sma/discovery.sh@275 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:34.141    10:12:29 sma.sma_discovery -- sma/discovery.sh@275 -- # jq -r '. | length'
00:15:34.399   10:12:29 sma.sma_discovery -- sma/discovery.sh@275 -- # [[ 0 -eq 0 ]]
00:15:34.399   10:12:29 sma.sma_discovery -- sma/discovery.sh@276 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:34.399   10:12:29 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:15:34.399   10:12:29 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:34.399   10:12:29 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:15:34.399   10:12:29 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:34.399    10:12:29 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:15:34.399   10:12:29 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:34.399    10:12:29 sma.sma_discovery -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:15:34.399   10:12:29 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:34.399   10:12:29 sma.sma_discovery -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:15:34.399   10:12:29 sma.sma_discovery -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py ]]
00:15:34.399   10:12:29 sma.sma_discovery -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:34.657  [2024-11-20 10:12:29.597884] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:local0' does not exist
00:15:34.657  request:
00:15:34.657  {
00:15:34.657    "nqn": "nqn.2016-06.io.spdk:local0",
00:15:34.657    "method": "nvmf_get_subsystems",
00:15:34.657    "req_id": 1
00:15:34.657  }
00:15:34.657  Got JSON-RPC error response
00:15:34.657  response:
00:15:34.657  {
00:15:34.657    "code": -19,
00:15:34.657    "message": "No such device"
00:15:34.657  }
00:15:34.657   10:12:29 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:15:34.657   10:12:29 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:34.657   10:12:29 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:34.657   10:12:29 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:34.657    10:12:29 sma.sma_discovery -- sma/discovery.sh@279 -- # create_device nqn.2016-06.io.spdk:local0 5794b457-f196-4b46-99f5-512c57779f1c 8009
00:15:34.657    10:12:29 sma.sma_discovery -- sma/discovery.sh@279 -- # jq -r .handle
00:15:34.657    10:12:29 sma.sma_discovery -- sma/discovery.sh@69 -- # local nqn=nqn.2016-06.io.spdk:local0
00:15:34.657    10:12:29 sma.sma_discovery -- sma/discovery.sh@70 -- # local volume_id=5794b457-f196-4b46-99f5-512c57779f1c
00:15:34.657    10:12:29 sma.sma_discovery -- sma/discovery.sh@71 -- # local volume=
00:15:34.657    10:12:29 sma.sma_discovery -- sma/discovery.sh@73 -- # shift
00:15:34.657    10:12:29 sma.sma_discovery -- sma/discovery.sh@74 -- # [[ -n 5794b457-f196-4b46-99f5-512c57779f1c ]]
00:15:34.657     10:12:29 sma.sma_discovery -- sma/discovery.sh@75 -- # format_volume 5794b457-f196-4b46-99f5-512c57779f1c 8009
00:15:34.657     10:12:29 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=5794b457-f196-4b46-99f5-512c57779f1c
00:15:34.657     10:12:29 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:15:34.657     10:12:29 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:15:34.657      10:12:29 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 5794b457-f196-4b46-99f5-512c57779f1c
00:15:34.657      10:12:29 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:34.657      10:12:29 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009
00:15:34.657      10:12:29 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009')
00:15:34.657      10:12:29 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:15:34.657      10:12:29 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:15:34.657      10:12:29 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:15:34.657      10:12:29 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:15:34.657      10:12:29 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:15:34.657      10:12:29 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:15:34.657      10:12:29 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:15:34.657    10:12:29 sma.sma_discovery -- sma/discovery.sh@75 -- # volume='"volume": {
00:15:34.657  "volume_id": "V5S0V/GWS0aZ9VEsV3efHA==",
00:15:34.657  "nvmf": {
00:15:34.657  "hostnqn": "nqn.2016-06.io.spdk:host0",
00:15:34.657  "discovery": {
00:15:34.657  "discovery_endpoints": [
00:15:34.657  {
00:15:34.657  "trtype": "tcp",
00:15:34.657  "traddr": "127.0.0.1",
00:15:34.657  "trsvcid": "8009"
00:15:34.657  }
00:15:34.657  ]
00:15:34.657  }
00:15:34.657  }
00:15:34.657  },'
00:15:34.657    10:12:29 sma.sma_discovery -- sma/discovery.sh@78 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:36.291  [2024-11-20 10:12:31.050666] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 ***
00:15:36.291   10:12:31 sma.sma_discovery -- sma/discovery.sh@279 -- # device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:15:36.291    10:12:31 sma.sma_discovery -- sma/discovery.sh@282 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:36.291    10:12:31 sma.sma_discovery -- sma/discovery.sh@282 -- # jq -r '. | length'
00:15:36.291   10:12:31 sma.sma_discovery -- sma/discovery.sh@282 -- # [[ 1 -eq 1 ]]
00:15:36.291   10:12:31 sma.sma_discovery -- sma/discovery.sh@283 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:36.291   10:12:31 sma.sma_discovery -- sma/discovery.sh@283 -- # jq -r '.[].trid.trsvcid'
00:15:36.292   10:12:31 sma.sma_discovery -- sma/discovery.sh@283 -- # grep 8009
00:15:36.549  8009
00:15:36.549    10:12:31 sma.sma_discovery -- sma/discovery.sh@284 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:36.549    10:12:31 sma.sma_discovery -- sma/discovery.sh@284 -- # jq -r '.[].namespaces | length'
00:15:36.806   10:12:31 sma.sma_discovery -- sma/discovery.sh@284 -- # [[ 1 -eq 1 ]]
00:15:36.806    10:12:31 sma.sma_discovery -- sma/discovery.sh@285 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:36.806    10:12:31 sma.sma_discovery -- sma/discovery.sh@285 -- # jq -r '.[].namespaces[0].uuid'
00:15:37.375   10:12:32 sma.sma_discovery -- sma/discovery.sh@285 -- # [[ 5794b457-f196-4b46-99f5-512c57779f1c == \5\7\9\4\b\4\5\7\-\f\1\9\6\-\4\b\4\6\-\9\9\f\5\-\5\1\2\c\5\7\7\7\9\f\1\c ]]
00:15:37.375   10:12:32 sma.sma_discovery -- sma/discovery.sh@288 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 5794b457-f196-4b46-99f5-512c57779f1c
00:15:37.375   10:12:32 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:37.375    10:12:32 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 5794b457-f196-4b46-99f5-512c57779f1c
00:15:37.375    10:12:32 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:37.632  {}
00:15:37.633    10:12:32 sma.sma_discovery -- sma/discovery.sh@290 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:37.633    10:12:32 sma.sma_discovery -- sma/discovery.sh@290 -- # jq -r '. | length'
00:15:37.890   10:12:32 sma.sma_discovery -- sma/discovery.sh@290 -- # [[ 0 -eq 0 ]]
00:15:37.890    10:12:32 sma.sma_discovery -- sma/discovery.sh@291 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:37.890    10:12:32 sma.sma_discovery -- sma/discovery.sh@291 -- # jq -r '.[].namespaces | length'
00:15:38.148   10:12:33 sma.sma_discovery -- sma/discovery.sh@291 -- # [[ 0 -eq 0 ]]
00:15:38.148   10:12:33 sma.sma_discovery -- sma/discovery.sh@294 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e 8010 8011
00:15:38.148   10:12:33 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:15:38.148   10:12:33 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:15:38.148   10:12:33 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:38.148    10:12:33 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e 8010 8011
00:15:38.148    10:12:33 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e
00:15:38.148    10:12:33 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:15:38.148    10:12:33 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:15:38.148     10:12:33 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e
00:15:38.148     10:12:33 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:38.148     10:12:33 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8010 8011
00:15:38.148     10:12:33 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8010' '8011')
00:15:38.148     10:12:33 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:15:38.148     10:12:33 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:15:38.148     10:12:33 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:15:38.148     10:12:33 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:15:38.148     10:12:33 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:15:38.148     10:12:33 sma.sma_discovery -- sma/discovery.sh@44 -- # echo ,
00:15:38.148     10:12:33 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:15:38.148     10:12:33 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:15:38.148     10:12:33 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:15:38.148     10:12:33 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:15:38.148     10:12:33 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:15:38.148     10:12:33 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:15:39.788  {}
00:15:39.788    10:12:34 sma.sma_discovery -- sma/discovery.sh@297 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:39.788    10:12:34 sma.sma_discovery -- sma/discovery.sh@297 -- # jq -r '. | length'
00:15:39.788   10:12:34 sma.sma_discovery -- sma/discovery.sh@297 -- # [[ 1 -eq 1 ]]
00:15:39.788    10:12:34 sma.sma_discovery -- sma/discovery.sh@298 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:39.788    10:12:34 sma.sma_discovery -- sma/discovery.sh@298 -- # jq -r '.[].namespaces | length'
00:15:40.046   10:12:35 sma.sma_discovery -- sma/discovery.sh@298 -- # [[ 1 -eq 1 ]]
00:15:40.046    10:12:35 sma.sma_discovery -- sma/discovery.sh@299 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:40.046    10:12:35 sma.sma_discovery -- sma/discovery.sh@299 -- # jq -r '.[].namespaces[0].uuid'
00:15:40.304   10:12:35 sma.sma_discovery -- sma/discovery.sh@299 -- # [[ 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e == \7\1\f\1\e\5\c\e\-\2\6\6\0\-\4\c\8\e\-\9\2\e\d\-\0\7\3\7\e\f\8\f\d\d\0\e ]]
00:15:40.304   10:12:35 sma.sma_discovery -- sma/discovery.sh@302 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 257c4508-e761-414f-80e2-0f0aab2c2e67 8011
00:15:40.304   10:12:35 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:15:40.304   10:12:35 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:15:40.304   10:12:35 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:40.304    10:12:35 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 257c4508-e761-414f-80e2-0f0aab2c2e67 8011
00:15:40.304    10:12:35 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=257c4508-e761-414f-80e2-0f0aab2c2e67
00:15:40.304    10:12:35 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:15:40.304    10:12:35 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:15:40.304     10:12:35 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 257c4508-e761-414f-80e2-0f0aab2c2e67
00:15:40.304     10:12:35 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:40.304     10:12:35 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8011
00:15:40.304     10:12:35 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8011')
00:15:40.304     10:12:35 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:15:40.304     10:12:35 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:15:40.304     10:12:35 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:15:40.304     10:12:35 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:15:40.304     10:12:35 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:15:40.304     10:12:35 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:15:40.304     10:12:35 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:15:40.821  {}
00:15:40.821    10:12:35 sma.sma_discovery -- sma/discovery.sh@305 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:40.821    10:12:35 sma.sma_discovery -- sma/discovery.sh@305 -- # jq -r '. | length'
00:15:41.079   10:12:35 sma.sma_discovery -- sma/discovery.sh@305 -- # [[ 1 -eq 1 ]]
00:15:41.079    10:12:35 sma.sma_discovery -- sma/discovery.sh@306 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:41.079    10:12:35 sma.sma_discovery -- sma/discovery.sh@306 -- # jq -r '.[].namespaces | length'
00:15:41.336   10:12:36 sma.sma_discovery -- sma/discovery.sh@306 -- # [[ 2 -eq 2 ]]
00:15:41.336   10:12:36 sma.sma_discovery -- sma/discovery.sh@307 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:41.336   10:12:36 sma.sma_discovery -- sma/discovery.sh@307 -- # jq -r '.[].namespaces[].uuid'
00:15:41.336   10:12:36 sma.sma_discovery -- sma/discovery.sh@307 -- # grep 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e
00:15:41.595  71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e
00:15:41.595   10:12:36 sma.sma_discovery -- sma/discovery.sh@308 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:41.595   10:12:36 sma.sma_discovery -- sma/discovery.sh@308 -- # jq -r '.[].namespaces[].uuid'
00:15:41.595   10:12:36 sma.sma_discovery -- sma/discovery.sh@308 -- # grep 257c4508-e761-414f-80e2-0f0aab2c2e67
00:15:41.853  257c4508-e761-414f-80e2-0f0aab2c2e67
00:15:41.853   10:12:36 sma.sma_discovery -- sma/discovery.sh@311 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 5794b457-f196-4b46-99f5-512c57779f1c
00:15:41.853   10:12:36 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:41.853    10:12:36 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 5794b457-f196-4b46-99f5-512c57779f1c
00:15:41.853    10:12:36 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:42.123  [2024-11-20 10:12:37.079954] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 5794b457-f196-4b46-99f5-512c57779f1c
00:15:42.123  {}
00:15:42.123   10:12:37 sma.sma_discovery -- sma/discovery.sh@312 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e
00:15:42.123   10:12:37 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:42.123    10:12:37 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e
00:15:42.123    10:12:37 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:42.380  {}
00:15:42.380   10:12:37 sma.sma_discovery -- sma/discovery.sh@313 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 257c4508-e761-414f-80e2-0f0aab2c2e67
00:15:42.380   10:12:37 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:42.380    10:12:37 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 257c4508-e761-414f-80e2-0f0aab2c2e67
00:15:42.380    10:12:37 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:42.638  {}
00:15:42.895   10:12:37 sma.sma_discovery -- sma/discovery.sh@314 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:15:42.895   10:12:37 sma.sma_discovery -- sma/discovery.sh@95 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:43.152  {}
00:15:43.152    10:12:38 sma.sma_discovery -- sma/discovery.sh@315 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:43.152    10:12:38 sma.sma_discovery -- sma/discovery.sh@315 -- # jq -r '. | length'
00:15:43.409   10:12:38 sma.sma_discovery -- sma/discovery.sh@315 -- # [[ 0 -eq 0 ]]
00:15:43.409    10:12:38 sma.sma_discovery -- sma/discovery.sh@317 -- # create_device nqn.2016-06.io.spdk:local0
00:15:43.409    10:12:38 sma.sma_discovery -- sma/discovery.sh@317 -- # jq -r .handle
00:15:43.409    10:12:38 sma.sma_discovery -- sma/discovery.sh@69 -- # local nqn=nqn.2016-06.io.spdk:local0
00:15:43.409    10:12:38 sma.sma_discovery -- sma/discovery.sh@70 -- # local volume_id=
00:15:43.409    10:12:38 sma.sma_discovery -- sma/discovery.sh@71 -- # local volume=
00:15:43.409    10:12:38 sma.sma_discovery -- sma/discovery.sh@73 -- # shift
00:15:43.409    10:12:38 sma.sma_discovery -- sma/discovery.sh@74 -- # [[ -n '' ]]
00:15:43.409    10:12:38 sma.sma_discovery -- sma/discovery.sh@78 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:43.666  [2024-11-20 10:12:38.553114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 ***
00:15:43.666   10:12:38 sma.sma_discovery -- sma/discovery.sh@317 -- # device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:15:43.666   10:12:38 sma.sma_discovery -- sma/discovery.sh@320 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:43.666    10:12:38 sma.sma_discovery -- sma/discovery.sh@320 -- # uuid2base64 5794b457-f196-4b46-99f5-512c57779f1c
00:15:43.666    10:12:38 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:45.298  {}
00:15:45.298    10:12:40 sma.sma_discovery -- sma/discovery.sh@345 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:45.298    10:12:40 sma.sma_discovery -- sma/discovery.sh@345 -- # jq -r '. | length'
00:15:45.298   10:12:40 sma.sma_discovery -- sma/discovery.sh@345 -- # [[ 1 -eq 1 ]]
00:15:45.298   10:12:40 sma.sma_discovery -- sma/discovery.sh@346 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:45.298   10:12:40 sma.sma_discovery -- sma/discovery.sh@346 -- # jq -r '.[].trid.trsvcid'
00:15:45.298   10:12:40 sma.sma_discovery -- sma/discovery.sh@346 -- # grep 8009
00:15:45.555  8009
00:15:45.555    10:12:40 sma.sma_discovery -- sma/discovery.sh@347 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:45.555    10:12:40 sma.sma_discovery -- sma/discovery.sh@347 -- # jq -r '.[].namespaces | length'
00:15:45.813   10:12:40 sma.sma_discovery -- sma/discovery.sh@347 -- # [[ 1 -eq 1 ]]
00:15:45.813    10:12:40 sma.sma_discovery -- sma/discovery.sh@348 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:45.813    10:12:40 sma.sma_discovery -- sma/discovery.sh@348 -- # jq -r '.[].namespaces[0].uuid'
00:15:46.072   10:12:41 sma.sma_discovery -- sma/discovery.sh@348 -- # [[ 5794b457-f196-4b46-99f5-512c57779f1c == \5\7\9\4\b\4\5\7\-\f\1\9\6\-\4\b\4\6\-\9\9\f\5\-\5\1\2\c\5\7\7\7\9\f\1\c ]]
00:15:46.072   10:12:41 sma.sma_discovery -- sma/discovery.sh@351 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:46.072    10:12:41 sma.sma_discovery -- sma/discovery.sh@351 -- # uuid2base64 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e
00:15:46.072    10:12:41 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:46.072   10:12:41 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:15:46.072   10:12:41 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:46.072   10:12:41 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:46.072   10:12:41 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:46.072    10:12:41 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:46.072   10:12:41 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:46.072    10:12:41 sma.sma_discovery -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:46.072   10:12:41 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:46.072   10:12:41 sma.sma_discovery -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:46.072   10:12:41 sma.sma_discovery -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:15:46.072   10:12:41 sma.sma_discovery -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:46.332  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:46.332  I0000 00:00:1732093961.394317 1820170 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:46.332  I0000 00:00:1732093961.396125 1820170 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:47.715  Traceback (most recent call last):
00:15:47.715    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:15:47.715      main(sys.argv[1:])
00:15:47.715    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:15:47.715      result = client.call(request['method'], request.get('params', {}))
00:15:47.715               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:47.715    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:15:47.715      response = func(request=json_format.ParseDict(params, input()))
00:15:47.715                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:47.715    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:15:47.715      return _end_unary_response_blocking(state, call, False, None)
00:15:47.715             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:47.715    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:15:47.715      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:15:47.715      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:47.715  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:15:47.715  	status = StatusCode.INVALID_ARGUMENT
00:15:47.715  	details = "Unexpected subsystem NQN"
00:15:47.715  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {created_time:"2024-11-20T10:12:42.52524431+01:00", grpc_status:3, grpc_message:"Unexpected subsystem NQN"}"
00:15:47.715  >
00:15:47.715   10:12:42 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:15:47.715   10:12:42 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:47.715   10:12:42 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:47.715   10:12:42 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:47.715    10:12:42 sma.sma_discovery -- sma/discovery.sh@377 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:47.715    10:12:42 sma.sma_discovery -- sma/discovery.sh@377 -- # jq -r '. | length'
00:15:47.992   10:12:42 sma.sma_discovery -- sma/discovery.sh@377 -- # [[ 1 -eq 1 ]]
00:15:47.992   10:12:42 sma.sma_discovery -- sma/discovery.sh@378 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:47.992   10:12:42 sma.sma_discovery -- sma/discovery.sh@378 -- # jq -r '.[].trid.trsvcid'
00:15:47.992   10:12:42 sma.sma_discovery -- sma/discovery.sh@378 -- # grep 8009
00:15:48.315  8009
00:15:48.315    10:12:43 sma.sma_discovery -- sma/discovery.sh@379 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:48.315    10:12:43 sma.sma_discovery -- sma/discovery.sh@379 -- # jq -r '.[].namespaces | length'
00:15:48.315   10:12:43 sma.sma_discovery -- sma/discovery.sh@379 -- # [[ 1 -eq 1 ]]
00:15:48.315    10:12:43 sma.sma_discovery -- sma/discovery.sh@380 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:48.315    10:12:43 sma.sma_discovery -- sma/discovery.sh@380 -- # jq -r '.[].namespaces[0].uuid'
00:15:48.599   10:12:43 sma.sma_discovery -- sma/discovery.sh@380 -- # [[ 5794b457-f196-4b46-99f5-512c57779f1c == \5\7\9\4\b\4\5\7\-\f\1\9\6\-\4\b\4\6\-\9\9\f\5\-\5\1\2\c\5\7\7\7\9\f\1\c ]]
00:15:48.599   10:12:43 sma.sma_discovery -- sma/discovery.sh@383 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:48.599    10:12:43 sma.sma_discovery -- sma/discovery.sh@383 -- # uuid2base64 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e
00:15:48.599    10:12:43 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:48.857   10:12:43 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:15:48.857   10:12:43 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:48.857   10:12:43 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:48.857   10:12:43 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:48.857    10:12:43 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:48.857   10:12:43 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:48.857    10:12:43 sma.sma_discovery -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:48.857   10:12:43 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:48.857   10:12:43 sma.sma_discovery -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:48.857   10:12:43 sma.sma_discovery -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:15:48.857   10:12:43 sma.sma_discovery -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:49.116  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:49.116  I0000 00:00:1732093963.987431 1820478 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:49.116  I0000 00:00:1732093963.989196 1820478 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:54.389  [2024-11-20 10:12:49.014205] bdev_nvme.c:7571:discovery_poller: *ERROR*: Discovery[127.0.0.1:8010] timed out while attaching NVM ctrlrs
00:15:54.389  Traceback (most recent call last):
00:15:54.389    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:15:54.389      main(sys.argv[1:])
00:15:54.389    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:15:54.389      result = client.call(request['method'], request.get('params', {}))
00:15:54.389               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:54.389    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:15:54.389      response = func(request=json_format.ParseDict(params, input()))
00:15:54.389                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:54.389    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:15:54.389      return _end_unary_response_blocking(state, call, False, None)
00:15:54.390             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:54.390    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:15:54.390      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:15:54.390      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:54.390  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:15:54.390  	status = StatusCode.INTERNAL
00:15:54.390  	details = "Failed to start discovery"
00:15:54.390  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {created_time:"2024-11-20T10:12:49.017561649+01:00", grpc_status:13, grpc_message:"Failed to start discovery"}"
00:15:54.390  >
00:15:54.390   10:12:49 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:15:54.390   10:12:49 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:54.390   10:12:49 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:54.390   10:12:49 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:54.390    10:12:49 sma.sma_discovery -- sma/discovery.sh@408 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:54.390    10:12:49 sma.sma_discovery -- sma/discovery.sh@408 -- # jq -r '. | length'
00:15:54.390   10:12:49 sma.sma_discovery -- sma/discovery.sh@408 -- # [[ 1 -eq 1 ]]
00:15:54.390   10:12:49 sma.sma_discovery -- sma/discovery.sh@409 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:54.390   10:12:49 sma.sma_discovery -- sma/discovery.sh@409 -- # jq -r '.[].trid.trsvcid'
00:15:54.390   10:12:49 sma.sma_discovery -- sma/discovery.sh@409 -- # grep 8009
00:15:54.648  8009
00:15:54.648    10:12:49 sma.sma_discovery -- sma/discovery.sh@410 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:54.648    10:12:49 sma.sma_discovery -- sma/discovery.sh@410 -- # jq -r '.[].namespaces | length'
00:15:54.906   10:12:49 sma.sma_discovery -- sma/discovery.sh@410 -- # [[ 1 -eq 1 ]]
00:15:54.906    10:12:49 sma.sma_discovery -- sma/discovery.sh@411 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:54.906    10:12:49 sma.sma_discovery -- sma/discovery.sh@411 -- # jq -r '.[].namespaces[0].uuid'
00:15:55.164   10:12:50 sma.sma_discovery -- sma/discovery.sh@411 -- # [[ 5794b457-f196-4b46-99f5-512c57779f1c == \5\7\9\4\b\4\5\7\-\f\1\9\6\-\4\b\4\6\-\9\9\f\5\-\5\1\2\c\5\7\7\7\9\f\1\c ]]
00:15:55.164    10:12:50 sma.sma_discovery -- sma/discovery.sh@414 -- # uuidgen
00:15:55.164   10:12:50 sma.sma_discovery -- sma/discovery.sh@414 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 b797e48a-c794-43d5-8968-9060f6087000 8008
00:15:55.164   10:12:50 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:15:55.164   10:12:50 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 b797e48a-c794-43d5-8968-9060f6087000 8008
00:15:55.164   10:12:50 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:15:55.164   10:12:50 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:55.164    10:12:50 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t attach_volume
00:15:55.164   10:12:50 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:55.164   10:12:50 sma.sma_discovery -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 b797e48a-c794-43d5-8968-9060f6087000 8008
00:15:55.164   10:12:50 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:15:55.164   10:12:50 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:15:55.164   10:12:50 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:55.164    10:12:50 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume b797e48a-c794-43d5-8968-9060f6087000 8008
00:15:55.164    10:12:50 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=b797e48a-c794-43d5-8968-9060f6087000
00:15:55.164    10:12:50 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:15:55.164    10:12:50 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:15:55.164     10:12:50 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 b797e48a-c794-43d5-8968-9060f6087000
00:15:55.164     10:12:50 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:55.164     10:12:50 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8008
00:15:55.164     10:12:50 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8008')
00:15:55.164     10:12:50 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:15:55.164     10:12:50 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:15:55.164     10:12:50 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:15:55.164     10:12:50 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:15:55.164     10:12:50 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:15:55.164     10:12:50 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:15:55.164     10:12:50 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:15:55.424  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:55.424  I0000 00:00:1732093970.465243 1821210 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:55.424  I0000 00:00:1732093970.467062 1821210 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:56.363  [2024-11-20 10:12:51.480812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:15:56.363  [2024-11-20 10:12:51.480903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001fa480 with addr=127.0.0.1, port=8008
00:15:56.363  [2024-11-20 10:12:51.480980] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:15:56.363  [2024-11-20 10:12:51.481003] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:15:56.363  [2024-11-20 10:12:51.481023] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] could not start discovery connect
00:15:57.738  [2024-11-20 10:12:52.483221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:15:57.738  [2024-11-20 10:12:52.483262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001fa700 with addr=127.0.0.1, port=8008
00:15:57.738  [2024-11-20 10:12:52.483318] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:15:57.738  [2024-11-20 10:12:52.483337] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:15:57.738  [2024-11-20 10:12:52.483354] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] could not start discovery connect
00:15:58.674  [2024-11-20 10:12:53.485644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:15:58.674  [2024-11-20 10:12:53.485689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001fa980 with addr=127.0.0.1, port=8008
00:15:58.674  [2024-11-20 10:12:53.485752] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:15:58.674  [2024-11-20 10:12:53.485774] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:15:58.674  [2024-11-20 10:12:53.485792] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] could not start discovery connect
00:15:59.610  [2024-11-20 10:12:54.488093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:15:59.610  [2024-11-20 10:12:54.488134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001fac00 with addr=127.0.0.1, port=8008
00:15:59.610  [2024-11-20 10:12:54.488188] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:15:59.611  [2024-11-20 10:12:54.488207] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:15:59.611  [2024-11-20 10:12:54.488223] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] could not start discovery connect
00:16:00.546  [2024-11-20 10:12:55.490427] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] timed out while attaching discovery ctrlr
00:16:00.546  Traceback (most recent call last):
00:16:00.546    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:16:00.546      main(sys.argv[1:])
00:16:00.546    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:16:00.546      result = client.call(request['method'], request.get('params', {}))
00:16:00.546               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:00.546    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:16:00.546      response = func(request=json_format.ParseDict(params, input()))
00:16:00.546                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:00.546    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:16:00.546      return _end_unary_response_blocking(state, call, False, None)
00:16:00.546             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:00.546    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:16:00.546      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:16:00.546      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:00.546  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:00.546  	status = StatusCode.INTERNAL
00:16:00.546  	details = "Failed to start discovery"
00:16:00.546  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {created_time:"2024-11-20T10:12:55.492763367+01:00", grpc_status:13, grpc_message:"Failed to start discovery"}"
00:16:00.546  >
00:16:00.546   10:12:55 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:16:00.546   10:12:55 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:00.546   10:12:55 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:00.546   10:12:55 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:00.546    10:12:55 sma.sma_discovery -- sma/discovery.sh@415 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:00.546    10:12:55 sma.sma_discovery -- sma/discovery.sh@415 -- # jq -r '. | length'
00:16:00.804   10:12:55 sma.sma_discovery -- sma/discovery.sh@415 -- # [[ 1 -eq 1 ]]
00:16:00.804   10:12:55 sma.sma_discovery -- sma/discovery.sh@416 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:00.804   10:12:55 sma.sma_discovery -- sma/discovery.sh@416 -- # jq -r '.[].trid.trsvcid'
00:16:00.804   10:12:55 sma.sma_discovery -- sma/discovery.sh@416 -- # grep 8009
00:16:01.062  8009
00:16:01.062   10:12:56 sma.sma_discovery -- sma/discovery.sh@420 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock1 nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:node1 1
00:16:01.322   10:12:56 sma.sma_discovery -- sma/discovery.sh@422 -- # sleep 2
00:16:01.581  WARNING:spdk.sma.volume.volume:Found disconnected volume: 5794b457-f196-4b46-99f5-512c57779f1c
00:16:03.487    10:12:58 sma.sma_discovery -- sma/discovery.sh@423 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:03.487    10:12:58 sma.sma_discovery -- sma/discovery.sh@423 -- # jq -r '. | length'
00:16:03.744   10:12:58 sma.sma_discovery -- sma/discovery.sh@423 -- # [[ 0 -eq 0 ]]
00:16:03.744   10:12:58 sma.sma_discovery -- sma/discovery.sh@424 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock1 nvmf_subsystem_add_ns nqn.2016-06.io.spdk:node1 5794b457-f196-4b46-99f5-512c57779f1c
00:16:04.003   10:12:58 sma.sma_discovery -- sma/discovery.sh@428 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e 8010
00:16:04.003   10:12:58 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:04.003   10:12:58 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:16:04.003   10:12:58 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:04.003    10:12:58 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e 8010
00:16:04.003    10:12:58 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e
00:16:04.003    10:12:58 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:16:04.003    10:12:58 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:16:04.003     10:12:58 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e
00:16:04.003     10:12:58 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:04.003     10:12:59 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8010
00:16:04.003     10:12:59 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8010')
00:16:04.003     10:12:59 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:16:04.003     10:12:59 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:16:04.003     10:12:59 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:04.003     10:12:59 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:16:04.003     10:12:59 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:16:04.003     10:12:59 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:16:04.003     10:12:59 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:04.263  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:04.264  I0000 00:00:1732093979.312325 1822311 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:04.264  I0000 00:00:1732093979.314160 1822311 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:05.649  {}
00:16:05.649   10:13:00 sma.sma_discovery -- sma/discovery.sh@429 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 257c4508-e761-414f-80e2-0f0aab2c2e67 8010
00:16:05.649   10:13:00 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:05.649   10:13:00 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:16:05.649   10:13:00 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:05.649    10:13:00 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 257c4508-e761-414f-80e2-0f0aab2c2e67 8010
00:16:05.649    10:13:00 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=257c4508-e761-414f-80e2-0f0aab2c2e67
00:16:05.649    10:13:00 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:16:05.649    10:13:00 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:16:05.649     10:13:00 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 257c4508-e761-414f-80e2-0f0aab2c2e67
00:16:05.649     10:13:00 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:05.649     10:13:00 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8010
00:16:05.649     10:13:00 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8010')
00:16:05.649     10:13:00 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:16:05.649     10:13:00 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:16:05.649     10:13:00 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:05.649     10:13:00 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:16:05.649     10:13:00 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:16:05.649     10:13:00 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:16:05.649     10:13:00 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:05.907  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:05.907  I0000 00:00:1732093980.772349 1822568 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:05.907  I0000 00:00:1732093980.774224 1822568 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:05.907  {}
00:16:05.907    10:13:00 sma.sma_discovery -- sma/discovery.sh@430 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:05.907    10:13:00 sma.sma_discovery -- sma/discovery.sh@430 -- # jq -r '.[].namespaces | length'
00:16:06.165   10:13:01 sma.sma_discovery -- sma/discovery.sh@430 -- # [[ 2 -eq 2 ]]
00:16:06.165    10:13:01 sma.sma_discovery -- sma/discovery.sh@431 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:06.165    10:13:01 sma.sma_discovery -- sma/discovery.sh@431 -- # jq -r '. | length'
00:16:06.423   10:13:01 sma.sma_discovery -- sma/discovery.sh@431 -- # [[ 1 -eq 1 ]]
00:16:06.423   10:13:01 sma.sma_discovery -- sma/discovery.sh@432 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2 nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:node2 2
00:16:06.681   10:13:01 sma.sma_discovery -- sma/discovery.sh@434 -- # sleep 2
00:16:07.616  WARNING:spdk.sma.volume.volume:Found disconnected volume: 257c4508-e761-414f-80e2-0f0aab2c2e67
00:16:08.554    10:13:03 sma.sma_discovery -- sma/discovery.sh@436 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:08.554    10:13:03 sma.sma_discovery -- sma/discovery.sh@436 -- # jq -r '.[].namespaces | length'
00:16:09.123   10:13:03 sma.sma_discovery -- sma/discovery.sh@436 -- # [[ 1 -eq 1 ]]
00:16:09.123    10:13:03 sma.sma_discovery -- sma/discovery.sh@437 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:09.123    10:13:03 sma.sma_discovery -- sma/discovery.sh@437 -- # jq -r '. | length'
00:16:09.123   10:13:04 sma.sma_discovery -- sma/discovery.sh@437 -- # [[ 1 -eq 1 ]]
00:16:09.123   10:13:04 sma.sma_discovery -- sma/discovery.sh@438 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2 nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:node2 1
00:16:09.382   10:13:04 sma.sma_discovery -- sma/discovery.sh@440 -- # sleep 2
00:16:10.763  WARNING:spdk.sma.volume.volume:Found disconnected volume: 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e
00:16:11.698    10:13:06 sma.sma_discovery -- sma/discovery.sh@442 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:11.699    10:13:06 sma.sma_discovery -- sma/discovery.sh@442 -- # jq -r '.[].namespaces | length'
00:16:11.699   10:13:06 sma.sma_discovery -- sma/discovery.sh@442 -- # [[ 0 -eq 0 ]]
00:16:11.699    10:13:06 sma.sma_discovery -- sma/discovery.sh@443 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:11.699    10:13:06 sma.sma_discovery -- sma/discovery.sh@443 -- # jq -r '. | length'
00:16:11.956   10:13:07 sma.sma_discovery -- sma/discovery.sh@443 -- # [[ 0 -eq 0 ]]
00:16:11.956   10:13:07 sma.sma_discovery -- sma/discovery.sh@444 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2 nvmf_subsystem_add_ns nqn.2016-06.io.spdk:node2 71f1e5ce-2660-4c8e-92ed-0737ef8fdd0e
00:16:12.524   10:13:07 sma.sma_discovery -- sma/discovery.sh@445 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2 nvmf_subsystem_add_ns nqn.2016-06.io.spdk:node2 257c4508-e761-414f-80e2-0f0aab2c2e67
00:16:12.781   10:13:07 sma.sma_discovery -- sma/discovery.sh@447 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:12.781   10:13:07 sma.sma_discovery -- sma/discovery.sh@95 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:12.781  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:12.781  I0000 00:00:1732093987.878675 1823460 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:12.781  I0000 00:00:1732093987.880554 1823460 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:12.781  {}
00:16:13.038   10:13:07 sma.sma_discovery -- sma/discovery.sh@449 -- # cleanup
00:16:13.038   10:13:07 sma.sma_discovery -- sma/discovery.sh@27 -- # killprocess 1815131
00:16:13.038   10:13:07 sma.sma_discovery -- common/autotest_common.sh@954 -- # '[' -z 1815131 ']'
00:16:13.038   10:13:07 sma.sma_discovery -- common/autotest_common.sh@958 -- # kill -0 1815131
00:16:13.038    10:13:07 sma.sma_discovery -- common/autotest_common.sh@959 -- # uname
00:16:13.038   10:13:07 sma.sma_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:13.038    10:13:07 sma.sma_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1815131
00:16:13.038   10:13:07 sma.sma_discovery -- common/autotest_common.sh@960 -- # process_name=python3
00:16:13.038   10:13:07 sma.sma_discovery -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:16:13.038   10:13:07 sma.sma_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1815131'
00:16:13.038  killing process with pid 1815131
00:16:13.038   10:13:07 sma.sma_discovery -- common/autotest_common.sh@973 -- # kill 1815131
00:16:13.038   10:13:07 sma.sma_discovery -- common/autotest_common.sh@978 -- # wait 1815131
00:16:13.038   10:13:08 sma.sma_discovery -- sma/discovery.sh@28 -- # killprocess 1815130
00:16:13.038   10:13:08 sma.sma_discovery -- common/autotest_common.sh@954 -- # '[' -z 1815130 ']'
00:16:13.038   10:13:08 sma.sma_discovery -- common/autotest_common.sh@958 -- # kill -0 1815130
00:16:13.038    10:13:08 sma.sma_discovery -- common/autotest_common.sh@959 -- # uname
00:16:13.038   10:13:08 sma.sma_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:13.038    10:13:08 sma.sma_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1815130
00:16:13.038   10:13:08 sma.sma_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:16:13.038   10:13:08 sma.sma_discovery -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:16:13.038   10:13:08 sma.sma_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1815130'
00:16:13.038  killing process with pid 1815130
00:16:13.038   10:13:08 sma.sma_discovery -- common/autotest_common.sh@973 -- # kill 1815130
00:16:13.038   10:13:08 sma.sma_discovery -- common/autotest_common.sh@978 -- # wait 1815130
00:16:14.938   10:13:10 sma.sma_discovery -- sma/discovery.sh@29 -- # killprocess 1815128
00:16:14.938   10:13:10 sma.sma_discovery -- common/autotest_common.sh@954 -- # '[' -z 1815128 ']'
00:16:14.938   10:13:10 sma.sma_discovery -- common/autotest_common.sh@958 -- # kill -0 1815128
00:16:14.938    10:13:10 sma.sma_discovery -- common/autotest_common.sh@959 -- # uname
00:16:14.938   10:13:10 sma.sma_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:14.938    10:13:10 sma.sma_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1815128
00:16:15.198   10:13:10 sma.sma_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:15.198   10:13:10 sma.sma_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:15.198   10:13:10 sma.sma_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1815128'
00:16:15.198  killing process with pid 1815128
00:16:15.198   10:13:10 sma.sma_discovery -- common/autotest_common.sh@973 -- # kill 1815128
00:16:15.198   10:13:10 sma.sma_discovery -- common/autotest_common.sh@978 -- # wait 1815128
00:16:17.102   10:13:12 sma.sma_discovery -- sma/discovery.sh@30 -- # killprocess 1815129
00:16:17.102   10:13:12 sma.sma_discovery -- common/autotest_common.sh@954 -- # '[' -z 1815129 ']'
00:16:17.102   10:13:12 sma.sma_discovery -- common/autotest_common.sh@958 -- # kill -0 1815129
00:16:17.102    10:13:12 sma.sma_discovery -- common/autotest_common.sh@959 -- # uname
00:16:17.102   10:13:12 sma.sma_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:17.102    10:13:12 sma.sma_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1815129
00:16:17.361   10:13:12 sma.sma_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:16:17.361   10:13:12 sma.sma_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:16:17.361   10:13:12 sma.sma_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1815129'
00:16:17.361  killing process with pid 1815129
00:16:17.361   10:13:12 sma.sma_discovery -- common/autotest_common.sh@973 -- # kill 1815129
00:16:17.361   10:13:12 sma.sma_discovery -- common/autotest_common.sh@978 -- # wait 1815129
00:16:19.265   10:13:14 sma.sma_discovery -- sma/discovery.sh@450 -- # trap - SIGINT SIGTERM EXIT
00:16:19.265  
00:16:19.265  real	1m6.457s
00:16:19.265  user	3m32.994s
00:16:19.265  sys	0m11.099s
00:16:19.265   10:13:14 sma.sma_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:19.265   10:13:14 sma.sma_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:19.265  ************************************
00:16:19.265  END TEST sma_discovery
00:16:19.265  ************************************
00:16:19.265   10:13:14 sma -- sma/sma.sh@15 -- # run_test sma_vhost /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vhost_blk.sh
00:16:19.265   10:13:14 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:19.265   10:13:14 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:19.265   10:13:14 sma -- common/autotest_common.sh@10 -- # set +x
00:16:19.265  ************************************
00:16:19.265  START TEST sma_vhost
00:16:19.265  ************************************
00:16:19.265   10:13:14 sma.sma_vhost -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vhost_blk.sh
00:16:19.265  * Looking for test storage...
00:16:19.265  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:16:19.265    10:13:14 sma.sma_vhost -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:16:19.265     10:13:14 sma.sma_vhost -- common/autotest_common.sh@1693 -- # lcov --version
00:16:19.265     10:13:14 sma.sma_vhost -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:16:19.525    10:13:14 sma.sma_vhost -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:16:19.525    10:13:14 sma.sma_vhost -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:19.525    10:13:14 sma.sma_vhost -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:19.525    10:13:14 sma.sma_vhost -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:19.525    10:13:14 sma.sma_vhost -- scripts/common.sh@336 -- # IFS=.-:
00:16:19.525    10:13:14 sma.sma_vhost -- scripts/common.sh@336 -- # read -ra ver1
00:16:19.525    10:13:14 sma.sma_vhost -- scripts/common.sh@337 -- # IFS=.-:
00:16:19.525    10:13:14 sma.sma_vhost -- scripts/common.sh@337 -- # read -ra ver2
00:16:19.525    10:13:14 sma.sma_vhost -- scripts/common.sh@338 -- # local 'op=<'
00:16:19.525    10:13:14 sma.sma_vhost -- scripts/common.sh@340 -- # ver1_l=2
00:16:19.525    10:13:14 sma.sma_vhost -- scripts/common.sh@341 -- # ver2_l=1
00:16:19.525    10:13:14 sma.sma_vhost -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:19.525    10:13:14 sma.sma_vhost -- scripts/common.sh@344 -- # case "$op" in
00:16:19.525    10:13:14 sma.sma_vhost -- scripts/common.sh@345 -- # : 1
00:16:19.525    10:13:14 sma.sma_vhost -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:19.525    10:13:14 sma.sma_vhost -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:19.525     10:13:14 sma.sma_vhost -- scripts/common.sh@365 -- # decimal 1
00:16:19.525     10:13:14 sma.sma_vhost -- scripts/common.sh@353 -- # local d=1
00:16:19.525     10:13:14 sma.sma_vhost -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:19.525     10:13:14 sma.sma_vhost -- scripts/common.sh@355 -- # echo 1
00:16:19.525    10:13:14 sma.sma_vhost -- scripts/common.sh@365 -- # ver1[v]=1
00:16:19.525     10:13:14 sma.sma_vhost -- scripts/common.sh@366 -- # decimal 2
00:16:19.525     10:13:14 sma.sma_vhost -- scripts/common.sh@353 -- # local d=2
00:16:19.525     10:13:14 sma.sma_vhost -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:19.525     10:13:14 sma.sma_vhost -- scripts/common.sh@355 -- # echo 2
00:16:19.525    10:13:14 sma.sma_vhost -- scripts/common.sh@366 -- # ver2[v]=2
00:16:19.525    10:13:14 sma.sma_vhost -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:19.525    10:13:14 sma.sma_vhost -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:19.525    10:13:14 sma.sma_vhost -- scripts/common.sh@368 -- # return 0
00:16:19.525    10:13:14 sma.sma_vhost -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:19.525    10:13:14 sma.sma_vhost -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:16:19.525  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:19.525  		--rc genhtml_branch_coverage=1
00:16:19.525  		--rc genhtml_function_coverage=1
00:16:19.525  		--rc genhtml_legend=1
00:16:19.525  		--rc geninfo_all_blocks=1
00:16:19.525  		--rc geninfo_unexecuted_blocks=1
00:16:19.525  		
00:16:19.525  		'
00:16:19.525    10:13:14 sma.sma_vhost -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:16:19.525  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:19.525  		--rc genhtml_branch_coverage=1
00:16:19.525  		--rc genhtml_function_coverage=1
00:16:19.525  		--rc genhtml_legend=1
00:16:19.525  		--rc geninfo_all_blocks=1
00:16:19.525  		--rc geninfo_unexecuted_blocks=1
00:16:19.525  		
00:16:19.525  		'
00:16:19.525    10:13:14 sma.sma_vhost -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:16:19.525  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:19.525  		--rc genhtml_branch_coverage=1
00:16:19.525  		--rc genhtml_function_coverage=1
00:16:19.525  		--rc genhtml_legend=1
00:16:19.525  		--rc geninfo_all_blocks=1
00:16:19.525  		--rc geninfo_unexecuted_blocks=1
00:16:19.525  		
00:16:19.525  		'
00:16:19.525    10:13:14 sma.sma_vhost -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:16:19.525  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:19.525  		--rc genhtml_branch_coverage=1
00:16:19.525  		--rc genhtml_function_coverage=1
00:16:19.525  		--rc genhtml_legend=1
00:16:19.525  		--rc geninfo_all_blocks=1
00:16:19.525  		--rc geninfo_unexecuted_blocks=1
00:16:19.525  		
00:16:19.526  		'
00:16:19.526   10:13:14 sma.sma_vhost -- sma/vhost_blk.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:16:19.526    10:13:14 sma.sma_vhost -- vhost/common.sh@6 -- # : false
00:16:19.526    10:13:14 sma.sma_vhost -- vhost/common.sh@7 -- # : /root/vhost_test
00:16:19.526    10:13:14 sma.sma_vhost -- vhost/common.sh@8 -- # : /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:16:19.526    10:13:14 sma.sma_vhost -- vhost/common.sh@9 -- # : qemu-img
00:16:19.526     10:13:14 sma.sma_vhost -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:16:19.526    10:13:14 sma.sma_vhost -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:16:19.526    10:13:14 sma.sma_vhost -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:16:19.526    10:13:14 sma.sma_vhost -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:16:19.526    10:13:14 sma.sma_vhost -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:16:19.526    10:13:14 sma.sma_vhost -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:16:19.526    10:13:14 sma.sma_vhost -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:16:19.526      10:13:14 sma.sma_vhost -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vhost_blk.sh
00:16:19.526     10:13:14 sma.sma_vhost -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:16:19.526    10:13:14 sma.sma_vhost -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:16:19.526    10:13:14 sma.sma_vhost -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:16:19.526    10:13:14 sma.sma_vhost -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:16:19.526    10:13:14 sma.sma_vhost -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:16:19.526    10:13:14 sma.sma_vhost -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:16:19.526    10:13:14 sma.sma_vhost -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:16:19.526     10:13:14 sma.sma_vhost -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:16:19.526     10:13:14 sma.sma_vhost -- common/autotest.config@2 -- # vhost_0_main_core=0
00:16:19.526     10:13:14 sma.sma_vhost -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:16:19.526     10:13:14 sma.sma_vhost -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:16:19.526     10:13:14 sma.sma_vhost -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:16:19.526     10:13:14 sma.sma_vhost -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:16:19.526     10:13:14 sma.sma_vhost -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:16:19.526     10:13:14 sma.sma_vhost -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:16:19.526     10:13:14 sma.sma_vhost -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:16:19.526     10:13:14 sma.sma_vhost -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:16:19.526     10:13:14 sma.sma_vhost -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:16:19.526     10:13:14 sma.sma_vhost -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:16:19.526     10:13:14 sma.sma_vhost -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:16:19.526     10:13:14 sma.sma_vhost -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:16:19.526     10:13:14 sma.sma_vhost -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:16:19.526     10:13:14 sma.sma_vhost -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:16:19.526     10:13:14 sma.sma_vhost -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:16:19.526     10:13:14 sma.sma_vhost -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:16:19.526     10:13:14 sma.sma_vhost -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:16:19.526     10:13:14 sma.sma_vhost -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:16:19.526     10:13:14 sma.sma_vhost -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:16:19.526     10:13:14 sma.sma_vhost -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:16:19.526     10:13:14 sma.sma_vhost -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:16:19.526     10:13:14 sma.sma_vhost -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:16:19.526     10:13:14 sma.sma_vhost -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:16:19.526     10:13:14 sma.sma_vhost -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:16:19.526    10:13:14 sma.sma_vhost -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:16:19.526     10:13:14 sma.sma_vhost -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:16:19.526     10:13:14 sma.sma_vhost -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:16:19.526     10:13:14 sma.sma_vhost -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:16:19.526     10:13:14 sma.sma_vhost -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:16:19.526     10:13:14 sma.sma_vhost -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:16:19.526     10:13:14 sma.sma_vhost -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:16:19.526      10:13:14 sma.sma_vhost -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:16:19.526       10:13:14 sma.sma_vhost -- scheduler/cgroups.sh@244 -- # check_cgroup
00:16:19.526       10:13:14 sma.sma_vhost -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:16:19.526       10:13:14 sma.sma_vhost -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:16:19.526       10:13:14 sma.sma_vhost -- scheduler/cgroups.sh@10 -- # echo 2
00:16:19.526      10:13:14 sma.sma_vhost -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:16:19.526   10:13:14 sma.sma_vhost -- sma/vhost_blk.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:16:19.526   10:13:14 sma.sma_vhost -- sma/vhost_blk.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:16:19.526   10:13:14 sma.sma_vhost -- sma/vhost_blk.sh@49 -- # vm_no=0
00:16:19.526   10:13:14 sma.sma_vhost -- sma/vhost_blk.sh@50 -- # bus_size=32
00:16:19.526   10:13:14 sma.sma_vhost -- sma/vhost_blk.sh@52 -- # timing_enter setup_vm
00:16:19.526   10:13:14 sma.sma_vhost -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:19.526   10:13:14 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:19.526   10:13:14 sma.sma_vhost -- sma/vhost_blk.sh@54 -- # vm_setup --force=0 --disk-type=virtio '--qemu-args=-qmp tcp:localhost:9090,server,nowait -device pci-bridge,chassis_nr=1,id=pci.spdk.0 -device pci-bridge,chassis_nr=2,id=pci.spdk.1' --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@518 -- # xtrace_disable
00:16:19.526   10:13:14 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:19.526  INFO: Creating new VM in /root/vhost_test/vms/0
00:16:19.526  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:16:19.526  INFO: TASK MASK: 1-2
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@671 -- # local node_num=0
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@672 -- # local boot_disk_present=false
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:16:19.526  INFO: NUMA NODE: 0
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@677 -- # [[ -n '' ]]
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@686 -- # [[ -z '' ]]
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@691 -- # (( 0 == 0 ))
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@691 -- # [[ virtio == virtio* ]]
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@692 -- # disks=("default_virtio.img")
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@701 -- # IFS=,
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@701 -- # read -r disk disk_type _
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@702 -- # [[ -z '' ]]
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@702 -- # disk_type=virtio
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@704 -- # case $disk_type in
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@706 -- # local raw_name=RAWSCSI
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@707 -- # local raw_disk=/root/vhost_test/vms/0/test.img
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@710 -- # [[ -f default_virtio.img ]]
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@714 -- # notice 'Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:16:19.526  INFO: Creating Virtio disc /root/vhost_test/vms/0/test.img
00:16:19.526   10:13:14 sma.sma_vhost -- vhost/common.sh@715 -- # dd if=/dev/zero of=/root/vhost_test/vms/0/test.img bs=1024k count=1024
00:16:20.096  1024+0 records in
00:16:20.096  1024+0 records out
00:16:20.096  1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.475917 s, 2.3 GB/s
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@718 -- # cmd+=(-device "virtio-scsi-pci,num_queues=$queue_number")
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@719 -- # cmd+=(-device "scsi-hd,drive=hd$i,vendor=$raw_name")
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@720 -- # cmd+=(-drive "if=none,id=hd$i,file=$raw_disk,format=raw$raw_cache")
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@780 -- # [[ -n '' ]]
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@785 -- # (( 1 ))
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@785 -- # cmd+=("${qemu_args[@]}")
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/0/run.sh'
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/0/run.sh'
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/0/run.sh'
00:16:20.096  INFO: Saving to /root/vhost_test/vms/0/run.sh
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@787 -- # cat
00:16:20.096    10:13:14 sma.sma_vhost -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 1-2 /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :100 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10002,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/0/qemu.pid -serial file:/root/vhost_test/vms/0/serial.log -D /root/vhost_test/vms/0/qemu.log -chardev file,path=/root/vhost_test/vms/0/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10000-:22,hostfwd=tcp::10001-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device virtio-scsi-pci,num_queues=2 -device scsi-hd,drive=hd,vendor=RAWSCSI -drive if=none,id=hd,file=/root/vhost_test/vms/0/test.img,format=raw '-qmp tcp:localhost:9090,server,nowait -device pci-bridge,chassis_nr=1,id=pci.spdk.0 -device pci-bridge,chassis_nr=2,id=pci.spdk.1'
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/0/run.sh
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@827 -- # echo 10000
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@828 -- # echo 10001
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@829 -- # echo 10002
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/0/migration_port
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@832 -- # [[ -z '' ]]
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@834 -- # echo 10004
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@835 -- # echo 100
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@837 -- # [[ -z '' ]]
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@838 -- # [[ -z '' ]]
00:16:20.096   10:13:14 sma.sma_vhost -- sma/vhost_blk.sh@59 -- # vm_run 0
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@843 -- # local run_all=false
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@844 -- # local vms_to_run=
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@846 -- # getopts a-: optchar
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@856 -- # false
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@859 -- # shift 0
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@860 -- # for vm in "$@"
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/0/run.sh ]]
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@866 -- # vms_to_run+=' 0'
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@871 -- # vm_is_running 0
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@373 -- # return 1
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/0/run.sh'
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/0/run.sh'
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/0/run.sh'
00:16:20.096  INFO: running /root/vhost_test/vms/0/run.sh
00:16:20.096   10:13:14 sma.sma_vhost -- vhost/common.sh@877 -- # /root/vhost_test/vms/0/run.sh
00:16:20.096  Running VM in /root/vhost_test/vms/0
00:16:20.356  Waiting for QEMU pid file
00:16:21.294  === qemu.log ===
00:16:21.294  === qemu.log ===
00:16:21.294   10:13:16 sma.sma_vhost -- sma/vhost_blk.sh@60 -- # vm_wait_for_boot 300 0
00:16:21.294   10:13:16 sma.sma_vhost -- vhost/common.sh@913 -- # assert_number 300
00:16:21.294   10:13:16 sma.sma_vhost -- vhost/common.sh@281 -- # [[ 300 =~ [0-9]+ ]]
00:16:21.294   10:13:16 sma.sma_vhost -- vhost/common.sh@281 -- # return 0
00:16:21.294   10:13:16 sma.sma_vhost -- vhost/common.sh@915 -- # xtrace_disable
00:16:21.294   10:13:16 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:21.294  INFO: Waiting for VMs to boot
00:16:21.294  INFO: waiting for VM0 (/root/vhost_test/vms/0)
00:16:43.263  
00:16:43.263  INFO: VM0 ready
00:16:43.263  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:16:43.263  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:16:43.263  INFO: all VMs ready
00:16:43.263   10:13:38 sma.sma_vhost -- vhost/common.sh@973 -- # return 0
00:16:43.263   10:13:38 sma.sma_vhost -- sma/vhost_blk.sh@61 -- # timing_exit setup_vm
00:16:43.263   10:13:38 sma.sma_vhost -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:43.263   10:13:38 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:43.263   10:13:38 sma.sma_vhost -- sma/vhost_blk.sh@64 -- # vhostpid=1827183
00:16:43.263   10:13:38 sma.sma_vhost -- sma/vhost_blk.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/vhost -S /var/tmp -m 0x3 --wait-for-rpc
00:16:43.263   10:13:38 sma.sma_vhost -- sma/vhost_blk.sh@66 -- # waitforlisten 1827183
00:16:43.263   10:13:38 sma.sma_vhost -- common/autotest_common.sh@835 -- # '[' -z 1827183 ']'
00:16:43.263   10:13:38 sma.sma_vhost -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:43.263   10:13:38 sma.sma_vhost -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:43.263   10:13:38 sma.sma_vhost -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:43.263  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:43.263   10:13:38 sma.sma_vhost -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:43.263   10:13:38 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:43.263  [2024-11-20 10:13:38.275420] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:16:43.264  [2024-11-20 10:13:38.275587] [ DPDK EAL parameters: vhost --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1827183 ]
00:16:43.264  EAL: No free 2048 kB hugepages reported on node 1
00:16:43.526  [2024-11-20 10:13:38.407785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:16:43.526  [2024-11-20 10:13:38.528778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:43.526  [2024-11-20 10:13:38.528780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:44.460   10:13:39 sma.sma_vhost -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:44.460   10:13:39 sma.sma_vhost -- common/autotest_common.sh@868 -- # return 0
00:16:44.460   10:13:39 sma.sma_vhost -- sma/vhost_blk.sh@69 -- # rpc_cmd dpdk_cryptodev_scan_accel_module
00:16:44.460   10:13:39 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.460   10:13:39 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:44.460   10:13:39 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.460   10:13:39 sma.sma_vhost -- sma/vhost_blk.sh@70 -- # rpc_cmd dpdk_cryptodev_set_driver -d crypto_aesni_mb
00:16:44.460   10:13:39 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.460   10:13:39 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:44.460  [2024-11-20 10:13:39.231461] accel_dpdk_cryptodev.c: 224:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:16:44.460   10:13:39 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.460   10:13:39 sma.sma_vhost -- sma/vhost_blk.sh@71 -- # rpc_cmd accel_assign_opc -o encrypt -m dpdk_cryptodev
00:16:44.460   10:13:39 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.460   10:13:39 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:44.460  [2024-11-20 10:13:39.239466] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:16:44.460   10:13:39 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.460   10:13:39 sma.sma_vhost -- sma/vhost_blk.sh@72 -- # rpc_cmd accel_assign_opc -o decrypt -m dpdk_cryptodev
00:16:44.460   10:13:39 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.460   10:13:39 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:44.460  [2024-11-20 10:13:39.247515] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:16:44.460   10:13:39 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.460   10:13:39 sma.sma_vhost -- sma/vhost_blk.sh@73 -- # rpc_cmd framework_start_init
00:16:44.460   10:13:39 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.460   10:13:39 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:44.460  [2024-11-20 10:13:39.456182] accel_dpdk_cryptodev.c:1179:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 1
00:16:44.719   10:13:39 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.719   10:13:39 sma.sma_vhost -- sma/vhost_blk.sh@93 -- # smapid=1827415
00:16:44.719   10:13:39 sma.sma_vhost -- sma/vhost_blk.sh@96 -- # sma_waitforlisten
00:16:44.719   10:13:39 sma.sma_vhost -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:16:44.719   10:13:39 sma.sma_vhost -- sma/common.sh@8 -- # local sma_port=8080
00:16:44.719   10:13:39 sma.sma_vhost -- sma/common.sh@10 -- # (( i = 0 ))
00:16:44.719   10:13:39 sma.sma_vhost -- sma/common.sh@10 -- # (( i < 5 ))
00:16:44.719   10:13:39 sma.sma_vhost -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:16:44.719    10:13:39 sma.sma_vhost -- sma/vhost_blk.sh@75 -- # cat
00:16:44.719   10:13:39 sma.sma_vhost -- sma/vhost_blk.sh@75 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:16:44.719   10:13:39 sma.sma_vhost -- sma/common.sh@14 -- # sleep 1s
00:16:44.978  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:44.978  I0000 00:00:1732094019.880827 1827415 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:45.546   10:13:40 sma.sma_vhost -- sma/common.sh@10 -- # (( i++ ))
00:16:45.546   10:13:40 sma.sma_vhost -- sma/common.sh@10 -- # (( i < 5 ))
00:16:45.546   10:13:40 sma.sma_vhost -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:16:45.805   10:13:40 sma.sma_vhost -- sma/common.sh@12 -- # return 0
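The wait loop above (`nc -z 127.0.0.1 8080`, up to 5 attempts with a 1 s sleep, from sma/common.sh) can be sketched in Python. The host, port, and retry values mirror the log; the function name and everything else here are illustrative, not part of the SPDK scripts:

```python
import socket
import time

def wait_for_listen(host: str, port: int, attempts: int = 5, delay: float = 1.0) -> bool:
    """Return True once a TCP connect to (host, port) succeeds, mimicking `nc -z` retries."""
    for _ in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True  # port is accepting connections
        except OSError:
            time.sleep(delay)  # not up yet; back off like the shell loop's `sleep 1s`
    return False
```

In the log the first probe fails (the SMA server is still starting), the loop sleeps once, and the second probe returns 0.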
00:16:45.806    10:13:40 sma.sma_vhost -- sma/vhost_blk.sh@99 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l'
00:16:45.806    10:13:40 sma.sma_vhost -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:16:45.806    10:13:40 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:16:45.806    10:13:40 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:16:45.806    10:13:40 sma.sma_vhost -- vhost/common.sh@338 -- # local vm_num=0
00:16:45.806    10:13:40 sma.sma_vhost -- vhost/common.sh@339 -- # shift
00:16:45.806     10:13:40 sma.sma_vhost -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:16:45.806     10:13:40 sma.sma_vhost -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:16:45.806     10:13:40 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:16:45.806     10:13:40 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:16:45.806     10:13:40 sma.sma_vhost -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:16:45.806     10:13:40 sma.sma_vhost -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:16:45.806    10:13:40 sma.sma_vhost -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l'
00:16:45.806  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:16:46.374   10:13:41 sma.sma_vhost -- sma/vhost_blk.sh@99 -- # [[ 0 -eq 0 ]]
00:16:46.374   10:13:41 sma.sma_vhost -- sma/vhost_blk.sh@102 -- # rpc_cmd bdev_null_create null0 100 4096
00:16:46.374   10:13:41 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.374   10:13:41 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:46.374  null0
00:16:46.374   10:13:41 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.374   10:13:41 sma.sma_vhost -- sma/vhost_blk.sh@103 -- # rpc_cmd bdev_null_create null1 100 4096
00:16:46.374   10:13:41 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.374   10:13:41 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:46.374  null1
00:16:46.374   10:13:41 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.374    10:13:41 sma.sma_vhost -- sma/vhost_blk.sh@104 -- # rpc_cmd bdev_get_bdevs -b null0
00:16:46.374    10:13:41 sma.sma_vhost -- sma/vhost_blk.sh@104 -- # jq -r '.[].uuid'
00:16:46.374    10:13:41 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.374    10:13:41 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:46.374    10:13:41 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.374   10:13:41 sma.sma_vhost -- sma/vhost_blk.sh@104 -- # uuid=822ce54e-4577-464b-a58e-36213d6b54db
00:16:46.374    10:13:41 sma.sma_vhost -- sma/vhost_blk.sh@105 -- # rpc_cmd bdev_get_bdevs -b null1
00:16:46.374    10:13:41 sma.sma_vhost -- sma/vhost_blk.sh@105 -- # jq -r '.[].uuid'
00:16:46.374    10:13:41 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.374    10:13:41 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:46.374    10:13:41 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.374   10:13:41 sma.sma_vhost -- sma/vhost_blk.sh@105 -- # uuid2=6ceebb8f-2b48-4b5b-a1ee-34d5e5259aa1
00:16:46.374    10:13:41 sma.sma_vhost -- sma/vhost_blk.sh@108 -- # create_device 0 822ce54e-4577-464b-a58e-36213d6b54db
00:16:46.374    10:13:41 sma.sma_vhost -- sma/vhost_blk.sh@108 -- # jq -r .handle
00:16:46.374    10:13:41 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:46.374     10:13:41 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 822ce54e-4577-464b-a58e-36213d6b54db
00:16:46.374     10:13:41 sma.sma_vhost -- sma/common.sh@20 -- # python
00:16:46.633  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:46.633  I0000 00:00:1732094021.570822 1827599 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:46.633  I0000 00:00:1732094021.572672 1827599 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:46.634  I0000 00:00:1732094021.574350 1827625 subchannel.cc:806] subchannel 0x561564e0e180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x561564d1b1c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x561564dbf460, grpc.internal.client_channel_call_destination=0x7f6ebf296390, grpc.internal.event_engine=0x561564d81440, grpc.internal.security_connector=0x561564df5d00, grpc.internal.subchannel_pool=0x561564df5c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x561564a3e2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:13:41.573773567+01:00"}), backing off for 1000 ms
00:16:46.634  VHOST_CONFIG: (/var/tmp/sma-0) vhost-user server: socket created, fd: 228
00:16:46.634  VHOST_CONFIG: (/var/tmp/sma-0) binding succeeded
00:16:47.567  VHOST_CONFIG: (/var/tmp/sma-0) new vhost user connection is 59
00:16:47.567  VHOST_CONFIG: (/var/tmp/sma-0) new device, handle is 0
00:16:47.567  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:16:47.567  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:16:47.567  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:16:47.567  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Vhost-user protocol features: 0x11ebf
00:16:47.567  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_QUEUE_NUM
00:16:47.567  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_BACKEND_REQ_FD
00:16:47.567  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_OWNER
00:16:47.567  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:16:47.567  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:47.567  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:232
00:16:47.567  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:16:47.567  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:47.567  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:233
00:16:47.567  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:16:47.567  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_CONFIG
00:16:47.567   10:13:42 sma.sma_vhost -- sma/vhost_blk.sh@108 -- # devid0=virtio_blk:sma-0
00:16:47.567   10:13:42 sma.sma_vhost -- sma/vhost_blk.sh@109 -- # rpc_cmd vhost_get_controllers -n sma-0
00:16:47.567   10:13:42 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.567   10:13:42 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:47.567  [
00:16:47.567  {
00:16:47.567  "ctrlr": "sma-0",
00:16:47.567  "cpumask": "0x3",
00:16:47.567  "delay_base_us": 0,
00:16:47.567  "iops_threshold": 60000,
00:16:47.567  "socket": "/var/tmp/sma-0",
00:16:47.567  "sessions": [
00:16:47.567  {
00:16:47.567  "vid": 0,
00:16:47.567  "id": 0,
00:16:47.567  "name": "sma-0s0",
00:16:47.567  "started": false,
00:16:47.567  "max_queues": 0,
00:16:47.567  "inflight_task_cnt": 0
00:16:47.567  }
00:16:47.567  ],
00:16:47.567  "backend_specific": {
00:16:47.568  "block": {
00:16:47.568  "readonly": false,
00:16:47.568  "bdev": "null0",
00:16:47.568  "transport": "vhost_user_blk"
00:16:47.568  }
00:16:47.568  }
00:16:47.568  }
00:16:47.568  ]
00:16:47.568   10:13:42 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.568    10:13:42 sma.sma_vhost -- sma/vhost_blk.sh@111 -- # create_device 1 6ceebb8f-2b48-4b5b-a1ee-34d5e5259aa1
00:16:47.568    10:13:42 sma.sma_vhost -- sma/vhost_blk.sh@111 -- # jq -r .handle
00:16:47.568    10:13:42 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:47.568     10:13:42 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 6ceebb8f-2b48-4b5b-a1ee-34d5e5259aa1
00:16:47.568     10:13:42 sma.sma_vhost -- sma/common.sh@20 -- # python
00:16:47.825  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:16:47.825  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:16:47.825  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:16:47.825  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:16:47.825  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000008):
00:16:47.825  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:16:47.825  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:16:47.825  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:16:47.825  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:16:47.825  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:16:47.825  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:16:47.825  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:16:47.825  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_INFLIGHT_FD
00:16:47.825  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd num_queues: 2
00:16:47.825  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd queue_size: 128
00:16:47.825  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_size: 4224
00:16:47.825  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_offset: 0
00:16:47.825  VHOST_CONFIG: (/var/tmp/sma-0) send inflight fd: 58
00:16:47.825  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_INFLIGHT_FD
00:16:47.825  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_size: 4224
00:16:47.825  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_offset: 0
00:16:47.825  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd num_queues: 2
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd queue_size: 128
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd fd: 234
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd pervq_inflight_size: 2112
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:58
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:232
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_MEM_TABLE
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) guest memory region size: 0x40000000
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest physical addr: 0x0
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest virtual  addr: 0x7fc977e00000
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) 	 host  virtual  addr: 0x7f2ebb600000
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap addr : 0x7f2ebb600000
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap size : 0x40000000
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap align: 0x200000
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap off  : 0x0
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:0 file:235
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:1 file:236
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 0
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 1
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x0000000f):
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 1
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 1
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 1
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:16:47.826  VHOST_CONFIG: (/var/tmp/sma-0) virtio is now ready for processing.
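The two `new device status` dumps above (0x00000008, then 0x0000000f) follow the virtio device-status bit layout from the virtio specification. A small decoding sketch (bit values per the spec; the function name is ours):

```python
# Virtio device status bits (virtio spec 1.x). Status 0 is the RESET state.
STATUS_BITS = {
    0x01: "ACKNOWLEDGE",
    0x02: "DRIVER",
    0x04: "DRIVER_OK",
    0x08: "FEATURES_OK",
    0x40: "DEVICE_NEEDS_RESET",
    0x80: "FAILED",
}

def decode_status(status: int) -> list[str]:
    """Return the names of the set status bits, as printed by VHOST_CONFIG above."""
    if status == 0:
        return ["RESET"]
    return [name for bit, name in sorted(STATUS_BITS.items()) if status & bit]

print(decode_status(0x08))  # first VHOST_USER_SET_STATUS in the log: FEATURES_OK only
print(decode_status(0x0F))  # final status: driver fully initialized, device ready
```

Status 0x0f (ACKNOWLEDGE + DRIVER + DRIVER_OK + FEATURES_OK) is exactly the point at which the log prints "virtio is now ready for processing."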
00:16:48.086  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:48.086  I0000 00:00:1732094022.959827 1827774 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:48.086  I0000 00:00:1732094022.961558 1827774 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:48.086  I0000 00:00:1732094022.963114 1827885 subchannel.cc:806] subchannel 0x55f2112af180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55f2111bc1c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55f211260460, grpc.internal.client_channel_call_destination=0x7f0cc039c390, grpc.internal.event_engine=0x55f211222440, grpc.internal.security_connector=0x55f211118650, grpc.internal.subchannel_pool=0x55f211296c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55f210edf2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:13:42.962620712+01:00"}), backing off for 999 ms
00:16:48.086  VHOST_CONFIG: (/var/tmp/sma-1) vhost-user server: socket created, fd: 239
00:16:48.086  VHOST_CONFIG: (/var/tmp/sma-1) binding succeeded
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) new vhost user connection is 237
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) new device, handle is 1
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_FEATURES
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) negotiated Vhost-user protocol features: 0x11ebf
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_QUEUE_NUM
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_BACKEND_REQ_FD
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_OWNER
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_FEATURES
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_CALL
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) vring call idx:0 file:241
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ERR
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_CALL
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) vring call idx:1 file:242
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ERR
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_CONFIG
00:16:49.025   10:13:43 sma.sma_vhost -- sma/vhost_blk.sh@111 -- # devid1=virtio_blk:sma-1
00:16:49.025   10:13:43 sma.sma_vhost -- sma/vhost_blk.sh@112 -- # rpc_cmd vhost_get_controllers -n sma-0
00:16:49.025   10:13:43 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:49.025   10:13:43 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:49.025  [
00:16:49.025  {
00:16:49.025  "ctrlr": "sma-0",
00:16:49.025  "cpumask": "0x3",
00:16:49.025  "delay_base_us": 0,
00:16:49.025  "iops_threshold": 60000,
00:16:49.025  "socket": "/var/tmp/sma-0",
00:16:49.025  "sessions": [
00:16:49.025  {
00:16:49.025  "vid": 0,
00:16:49.025  "id": 0,
00:16:49.025  "name": "sma-0s0",
00:16:49.025  "started": true,
00:16:49.025  "max_queues": 2,
00:16:49.025  "inflight_task_cnt": 0
00:16:49.025  }
00:16:49.025  ],
00:16:49.025  "backend_specific": {
00:16:49.025  "block": {
00:16:49.025  "readonly": false,
00:16:49.025  "bdev": "null0",
00:16:49.025  "transport": "vhost_user_blk"
00:16:49.025  }
00:16:49.025  }
00:16:49.025  }
00:16:49.025  ]
00:16:49.025   10:13:43 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:49.025   10:13:43 sma.sma_vhost -- sma/vhost_blk.sh@113 -- # rpc_cmd vhost_get_controllers -n sma-1
00:16:49.025   10:13:43 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:49.025   10:13:43 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:49.025  [
00:16:49.025  {
00:16:49.025  "ctrlr": "sma-1",
00:16:49.025  "cpumask": "0x3",
00:16:49.025  "delay_base_us": 0,
00:16:49.025  "iops_threshold": 60000,
00:16:49.025  "socket": "/var/tmp/sma-1",
00:16:49.025  "sessions": [
00:16:49.025  {
00:16:49.025  "vid": 1,
00:16:49.025  "id": 0,
00:16:49.025  "name": "sma-1s1",
00:16:49.025  "started": false,
00:16:49.025  "max_queues": 0,
00:16:49.025  "inflight_task_cnt": 0
00:16:49.025  }
00:16:49.025  ],
00:16:49.025  "backend_specific": {
00:16:49.025  "block": {
00:16:49.025  "readonly": false,
00:16:49.025  "bdev": "null1",
00:16:49.025  "transport": "vhost_user_blk"
00:16:49.025  }
00:16:49.025  }
00:16:49.025  }
00:16:49.025  ]
00:16:49.025   10:13:43 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:49.025   10:13:43 sma.sma_vhost -- sma/vhost_blk.sh@114 -- # [[ virtio_blk:sma-0 != \v\i\r\t\i\o\_\b\l\k\:\s\m\a\-\1 ]]
00:16:49.025    10:13:43 sma.sma_vhost -- sma/vhost_blk.sh@117 -- # rpc_cmd vhost_get_controllers
00:16:49.025    10:13:43 sma.sma_vhost -- sma/vhost_blk.sh@117 -- # jq -r '. | length'
00:16:49.025    10:13:43 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:49.025    10:13:43 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:49.025    10:13:43 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:49.025   10:13:43 sma.sma_vhost -- sma/vhost_blk.sh@117 -- # [[ 2 -eq 2 ]]
00:16:49.025    10:13:43 sma.sma_vhost -- sma/vhost_blk.sh@121 -- # create_device 0 822ce54e-4577-464b-a58e-36213d6b54db
00:16:49.025    10:13:43 sma.sma_vhost -- sma/vhost_blk.sh@121 -- # jq -r .handle
00:16:49.025    10:13:43 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:49.025     10:13:43 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 822ce54e-4577-464b-a58e-36213d6b54db
00:16:49.025     10:13:43 sma.sma_vhost -- sma/common.sh@20 -- # python
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_FEATURES
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) negotiated Virtio features: 0x150005446
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_STATUS
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_STATUS
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) new device status(0x00000008):
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) 	-RESET: 0
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) 	-ACKNOWLEDGE: 0
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER: 0
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) 	-FEATURES_OK: 1
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER_OK: 0
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) 	-DEVICE_NEED_RESET: 0
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) 	-FAILED: 0
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_INFLIGHT_FD
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) get_inflight_fd num_queues: 2
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) get_inflight_fd queue_size: 128
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) send inflight mmap_size: 4224
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) send inflight mmap_offset: 0
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) send inflight fd: 60
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_INFLIGHT_FD
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd mmap_size: 4224
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd mmap_offset: 0
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd num_queues: 2
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd queue_size: 128
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd fd: 243
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd pervq_inflight_size: 2112
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_CALL
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) vring call idx:0 file:60
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_CALL
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) vring call idx:1 file:241
00:16:49.025  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_FEATURES
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) negotiated Virtio features: 0x150005446
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_STATUS
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_MEM_TABLE
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) guest memory region size: 0x40000000
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) 	 guest physical addr: 0x0
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) 	 guest virtual  addr: 0x7fc977e00000
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) 	 host  virtual  addr: 0x7f2e7b600000
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) 	 mmap addr : 0x7f2e7b600000
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) 	 mmap size : 0x40000000
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) 	 mmap align: 0x200000
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) 	 mmap off  : 0x0
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_NUM
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_BASE
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ADDR
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_KICK
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) vring kick idx:0 file:244
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_NUM
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_BASE
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ADDR
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_KICK
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) vring kick idx:1 file:245
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ENABLE
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) set queue enable: 1 to qp idx: 0
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ENABLE
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) set queue enable: 1 to qp idx: 1
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_STATUS
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_STATUS
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) new device status(0x0000000f):
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) 	-RESET: 0
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) 	-ACKNOWLEDGE: 1
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER: 1
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) 	-FEATURES_OK: 1
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER_OK: 1
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) 	-DEVICE_NEED_RESET: 0
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) 	-FAILED: 0
00:16:49.026  VHOST_CONFIG: (/var/tmp/sma-1) virtio is now ready for processing.
00:16:49.284  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:49.284  I0000 00:00:1732094024.239999 1828046 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:49.284  I0000 00:00:1732094024.241760 1828046 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:49.284  I0000 00:00:1732094024.243414 1828060 subchannel.cc:806] subchannel 0x55b2f27f8180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55b2f27051c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55b2f27a9460, grpc.internal.client_channel_call_destination=0x7f3ea1e14390, grpc.internal.event_engine=0x55b2f276b440, grpc.internal.security_connector=0x55b2f27dfd00, grpc.internal.subchannel_pool=0x55b2f27dfc10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55b2f24282f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:13:44.242914515+01:00"}), backing off for 1000 ms
00:16:49.284   10:13:44 sma.sma_vhost -- sma/vhost_blk.sh@121 -- # tmp0=virtio_blk:sma-0
00:16:49.284    10:13:44 sma.sma_vhost -- sma/vhost_blk.sh@122 -- # create_device 1 6ceebb8f-2b48-4b5b-a1ee-34d5e5259aa1
00:16:49.284    10:13:44 sma.sma_vhost -- sma/vhost_blk.sh@122 -- # jq -r .handle
00:16:49.284    10:13:44 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:49.284     10:13:44 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 6ceebb8f-2b48-4b5b-a1ee-34d5e5259aa1
00:16:49.284     10:13:44 sma.sma_vhost -- sma/common.sh@20 -- # python
00:16:49.542  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:49.542  I0000 00:00:1732094024.589995 1828083 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:49.542  I0000 00:00:1732094024.591894 1828083 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:49.542  I0000 00:00:1732094024.593527 1828086 subchannel.cc:806] subchannel 0x5623c2b5c180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5623c2a691c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5623c2b0d460, grpc.internal.client_channel_call_destination=0x7f0da76cc390, grpc.internal.event_engine=0x5623c2acf440, grpc.internal.security_connector=0x5623c29c5650, grpc.internal.subchannel_pool=0x5623c2b43c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5623c278c2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:13:44.592981467+01:00"}), backing off for 1000 ms
00:16:49.800   10:13:44 sma.sma_vhost -- sma/vhost_blk.sh@122 -- # tmp1=virtio_blk:sma-1
00:16:49.800   10:13:44 sma.sma_vhost -- sma/vhost_blk.sh@125 -- # NOT create_device 1 822ce54e-4577-464b-a58e-36213d6b54db
00:16:49.800   10:13:44 sma.sma_vhost -- sma/vhost_blk.sh@125 -- # jq -r .handle
00:16:49.800   10:13:44 sma.sma_vhost -- common/autotest_common.sh@652 -- # local es=0
00:16:49.800   10:13:44 sma.sma_vhost -- common/autotest_common.sh@654 -- # valid_exec_arg create_device 1 822ce54e-4577-464b-a58e-36213d6b54db
00:16:49.800   10:13:44 sma.sma_vhost -- common/autotest_common.sh@640 -- # local arg=create_device
00:16:49.800   10:13:44 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:49.800    10:13:44 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -t create_device
00:16:49.800   10:13:44 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:49.800   10:13:44 sma.sma_vhost -- common/autotest_common.sh@655 -- # create_device 1 822ce54e-4577-464b-a58e-36213d6b54db
00:16:49.800   10:13:44 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:49.800    10:13:44 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 822ce54e-4577-464b-a58e-36213d6b54db
00:16:49.800    10:13:44 sma.sma_vhost -- sma/common.sh@20 -- # python
00:16:50.058  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:50.058  I0000 00:00:1732094024.966375 1828109 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:50.058  I0000 00:00:1732094024.968227 1828109 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:50.058  I0000 00:00:1732094024.969941 1828135 subchannel.cc:806] subchannel 0x562ba27a0180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x562ba26ad1c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x562ba2751460, grpc.internal.client_channel_call_destination=0x7f73fd03e390, grpc.internal.event_engine=0x562ba2713440, grpc.internal.security_connector=0x562ba2609650, grpc.internal.subchannel_pool=0x562ba2787c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x562ba23d02f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:13:44.969415434+01:00"}), backing off for 999 ms
00:16:50.058  Traceback (most recent call last):
00:16:50.058    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:16:50.058      main(sys.argv[1:])
00:16:50.058    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:16:50.058      result = client.call(request['method'], request.get('params', {}))
00:16:50.058               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:50.058    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:16:50.058      response = func(request=json_format.ParseDict(params, input()))
00:16:50.058                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:50.058    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:16:50.058      return _end_unary_response_blocking(state, call, False, None)
00:16:50.058             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:50.058    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:16:50.058      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:16:50.058      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:50.058  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:50.058  	status = StatusCode.INTERNAL
00:16:50.058  	details = "Failed to create vhost device"
00:16:50.058  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-11-20T10:13:45.01883211+01:00", grpc_status:13, grpc_message:"Failed to create vhost device"}"
00:16:50.058  >
00:16:50.058   10:13:45 sma.sma_vhost -- common/autotest_common.sh@655 -- # es=1
00:16:50.058   10:13:45 sma.sma_vhost -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:50.058   10:13:45 sma.sma_vhost -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:50.058   10:13:45 sma.sma_vhost -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:50.058    10:13:45 sma.sma_vhost -- sma/vhost_blk.sh@128 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l'
00:16:50.058    10:13:45 sma.sma_vhost -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:16:50.058    10:13:45 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:16:50.058    10:13:45 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:16:50.058    10:13:45 sma.sma_vhost -- vhost/common.sh@338 -- # local vm_num=0
00:16:50.058    10:13:45 sma.sma_vhost -- vhost/common.sh@339 -- # shift
00:16:50.058     10:13:45 sma.sma_vhost -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:16:50.058     10:13:45 sma.sma_vhost -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:16:50.059     10:13:45 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:16:50.059     10:13:45 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:16:50.059     10:13:45 sma.sma_vhost -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:16:50.059     10:13:45 sma.sma_vhost -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:16:50.059    10:13:45 sma.sma_vhost -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l'
00:16:50.059  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:16:50.317   10:13:45 sma.sma_vhost -- sma/vhost_blk.sh@128 -- # [[ 2 -eq 2 ]]
00:16:50.317    10:13:45 sma.sma_vhost -- sma/vhost_blk.sh@130 -- # rpc_cmd vhost_get_controllers
00:16:50.317    10:13:45 sma.sma_vhost -- sma/vhost_blk.sh@130 -- # jq -r '. | length'
00:16:50.317    10:13:45 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.317    10:13:45 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:50.317    10:13:45 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:50.317   10:13:45 sma.sma_vhost -- sma/vhost_blk.sh@130 -- # [[ 2 -eq 2 ]]
00:16:50.317   10:13:45 sma.sma_vhost -- sma/vhost_blk.sh@131 -- # [[ virtio_blk:sma-0 == \v\i\r\t\i\o\_\b\l\k\:\s\m\a\-\0 ]]
00:16:50.317   10:13:45 sma.sma_vhost -- sma/vhost_blk.sh@132 -- # [[ virtio_blk:sma-1 == \v\i\r\t\i\o\_\b\l\k\:\s\m\a\-\1 ]]
00:16:50.317   10:13:45 sma.sma_vhost -- sma/vhost_blk.sh@135 -- # delete_device virtio_blk:sma-0
00:16:50.317   10:13:45 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:50.577  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:50.577  I0000 00:00:1732094025.505093 1828270 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:50.577  I0000 00:00:1732094025.506948 1828270 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:50.577  I0000 00:00:1732094025.508577 1828275 subchannel.cc:806] subchannel 0x55efdda9e180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55efdd9ab1c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55efdda4f460, grpc.internal.client_channel_call_destination=0x7fb35b976390, grpc.internal.event_engine=0x55efdda11440, grpc.internal.security_connector=0x55efdda85d00, grpc.internal.subchannel_pool=0x55efdda85c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55efdd6ce2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:13:45.508012918+01:00"}), backing off for 1000 ms
00:16:51.147  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:16:51.147  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000000):
00:16:51.147  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 1
00:16:51.147  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:16:51.147  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:16:51.147  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 0
00:16:51.147  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:16:51.147  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:16:51.147  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:16:51.147  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:16:51.147  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 0
00:16:51.147  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:16:51.147  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 1
00:16:51.147  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:16:51.147  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 file:49
00:16:51.147  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:16:51.147  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 file:1
00:16:51.406  VHOST_CONFIG: (/var/tmp/sma-0) vhost peer closed
00:16:51.406  {}
00:16:51.406   10:13:46 sma.sma_vhost -- sma/vhost_blk.sh@136 -- # NOT rpc_cmd vhost_get_controllers -n sma-0
00:16:51.406   10:13:46 sma.sma_vhost -- common/autotest_common.sh@652 -- # local es=0
00:16:51.406   10:13:46 sma.sma_vhost -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd vhost_get_controllers -n sma-0
00:16:51.406   10:13:46 sma.sma_vhost -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:16:51.406   10:13:46 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:51.406    10:13:46 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:16:51.406   10:13:46 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:51.406   10:13:46 sma.sma_vhost -- common/autotest_common.sh@655 -- # rpc_cmd vhost_get_controllers -n sma-0
00:16:51.406   10:13:46 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:51.406   10:13:46 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:51.406  request:
00:16:51.406  {
00:16:51.406  "name": "sma-0",
00:16:51.406  "method": "vhost_get_controllers",
00:16:51.406  "req_id": 1
00:16:51.406  }
00:16:51.406  Got JSON-RPC error response
00:16:51.406  response:
00:16:51.406  {
00:16:51.406  "code": -32603,
00:16:51.406  "message": "No such device"
00:16:51.406  }
00:16:51.406   10:13:46 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:16:51.406   10:13:46 sma.sma_vhost -- common/autotest_common.sh@655 -- # es=1
00:16:51.406   10:13:46 sma.sma_vhost -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:51.406   10:13:46 sma.sma_vhost -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:51.406   10:13:46 sma.sma_vhost -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:51.406    10:13:46 sma.sma_vhost -- sma/vhost_blk.sh@137 -- # rpc_cmd vhost_get_controllers
00:16:51.406    10:13:46 sma.sma_vhost -- sma/vhost_blk.sh@137 -- # jq -r '. | length'
00:16:51.406    10:13:46 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:51.406    10:13:46 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:51.406    10:13:46 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:51.406   10:13:46 sma.sma_vhost -- sma/vhost_blk.sh@137 -- # [[ 1 -eq 1 ]]
00:16:51.406   10:13:46 sma.sma_vhost -- sma/vhost_blk.sh@139 -- # delete_device virtio_blk:sma-1
00:16:51.406   10:13:46 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:51.665  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:51.665  I0000 00:00:1732094026.630919 1828427 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:51.665  I0000 00:00:1732094026.632586 1828427 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:51.665  I0000 00:00:1732094026.634083 1828435 subchannel.cc:806] subchannel 0x557eba7da180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x557eba6e71c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x557eba78b460, grpc.internal.client_channel_call_destination=0x7fdb0092a390, grpc.internal.event_engine=0x557eba74d440, grpc.internal.security_connector=0x557eba7c1d00, grpc.internal.subchannel_pool=0x557eba7c1c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x557eba40a2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:13:46.633596407+01:00"}), backing off for 999 ms
00:16:51.665  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_STATUS
00:16:51.665  VHOST_CONFIG: (/var/tmp/sma-1) new device status(0x00000000):
00:16:51.665  VHOST_CONFIG: (/var/tmp/sma-1) 	-RESET: 1
00:16:51.665  VHOST_CONFIG: (/var/tmp/sma-1) 	-ACKNOWLEDGE: 0
00:16:51.665  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER: 0
00:16:51.665  VHOST_CONFIG: (/var/tmp/sma-1) 	-FEATURES_OK: 0
00:16:51.665  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER_OK: 0
00:16:51.665  VHOST_CONFIG: (/var/tmp/sma-1) 	-DEVICE_NEED_RESET: 0
00:16:51.665  VHOST_CONFIG: (/var/tmp/sma-1) 	-FAILED: 0
00:16:51.665  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ENABLE
00:16:51.665  VHOST_CONFIG: (/var/tmp/sma-1) set queue enable: 0 to qp idx: 0
00:16:51.665  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ENABLE
00:16:51.665  VHOST_CONFIG: (/var/tmp/sma-1) set queue enable: 0 to qp idx: 1
00:16:51.665  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_VRING_BASE
00:16:51.665  VHOST_CONFIG: (/var/tmp/sma-1) vring base idx:0 file:25
00:16:51.665  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_VRING_BASE
00:16:51.665  VHOST_CONFIG: (/var/tmp/sma-1) vring base idx:1 file:25
00:16:51.665  VHOST_CONFIG: (/var/tmp/sma-1) vhost peer closed
00:16:51.665  {}
00:16:51.924   10:13:46 sma.sma_vhost -- sma/vhost_blk.sh@140 -- # NOT rpc_cmd vhost_get_controllers -n sma-1
00:16:51.924   10:13:46 sma.sma_vhost -- common/autotest_common.sh@652 -- # local es=0
00:16:51.924   10:13:46 sma.sma_vhost -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd vhost_get_controllers -n sma-1
00:16:51.924   10:13:46 sma.sma_vhost -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:16:51.924   10:13:46 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:51.924    10:13:46 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:16:51.924   10:13:46 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:51.924   10:13:46 sma.sma_vhost -- common/autotest_common.sh@655 -- # rpc_cmd vhost_get_controllers -n sma-1
00:16:51.924   10:13:46 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:51.924   10:13:46 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:51.924  request:
00:16:51.924  {
00:16:51.924  "name": "sma-1",
00:16:51.924  "method": "vhost_get_controllers",
00:16:51.924  "req_id": 1
00:16:51.924  }
00:16:51.924  Got JSON-RPC error response
00:16:51.924  response:
00:16:51.924  {
00:16:51.924  "code": -32603,
00:16:51.924  "message": "No such device"
00:16:51.924  }
00:16:51.924   10:13:46 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:16:51.924   10:13:46 sma.sma_vhost -- common/autotest_common.sh@655 -- # es=1
00:16:51.924   10:13:46 sma.sma_vhost -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:51.924   10:13:46 sma.sma_vhost -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:51.924   10:13:46 sma.sma_vhost -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:51.924    10:13:46 sma.sma_vhost -- sma/vhost_blk.sh@141 -- # rpc_cmd vhost_get_controllers
00:16:51.924    10:13:46 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:51.924    10:13:46 sma.sma_vhost -- sma/vhost_blk.sh@141 -- # jq -r '. | length'
00:16:51.924    10:13:46 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:51.924    10:13:46 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:51.924   10:13:46 sma.sma_vhost -- sma/vhost_blk.sh@141 -- # [[ 0 -eq 0 ]]
00:16:51.924   10:13:46 sma.sma_vhost -- sma/vhost_blk.sh@144 -- # delete_device virtio_blk:sma-0
00:16:51.924   10:13:46 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:52.182  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:52.182  I0000 00:00:1732094027.081256 1828459 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:52.182  I0000 00:00:1732094027.083089 1828459 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:52.182  I0000 00:00:1732094027.084701 1828467 subchannel.cc:806] subchannel 0x55cc0b57d180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55cc0b48a1c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55cc0b52e460, grpc.internal.client_channel_call_destination=0x7fe1c91e1390, grpc.internal.event_engine=0x55cc0b4f0440, grpc.internal.security_connector=0x55cc0b564d00, grpc.internal.subchannel_pool=0x55cc0b564c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55cc0b1ad2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:13:47.08419897+01:00"}), backing off for 1000 ms
00:16:52.182  {}
00:16:52.182   10:13:47 sma.sma_vhost -- sma/vhost_blk.sh@145 -- # delete_device virtio_blk:sma-1
00:16:52.182   10:13:47 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:52.442  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:52.442  I0000 00:00:1732094027.369889 1828487 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:52.442  I0000 00:00:1732094027.371750 1828487 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:52.442  I0000 00:00:1732094027.373362 1828606 subchannel.cc:806] subchannel 0x5569fe6dc180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5569fe5e91c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5569fe68d460, grpc.internal.client_channel_call_destination=0x7f158257b390, grpc.internal.event_engine=0x5569fe64f440, grpc.internal.security_connector=0x5569fe6c3d00, grpc.internal.subchannel_pool=0x5569fe6c3c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5569fe30c2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:13:47.372782762+01:00"}), backing off for 1000 ms
00:16:52.442  {}
00:16:52.442    10:13:47 sma.sma_vhost -- sma/vhost_blk.sh@148 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l'
00:16:52.442    10:13:47 sma.sma_vhost -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:16:52.442    10:13:47 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:16:52.442    10:13:47 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:16:52.442    10:13:47 sma.sma_vhost -- vhost/common.sh@338 -- # local vm_num=0
00:16:52.442    10:13:47 sma.sma_vhost -- vhost/common.sh@339 -- # shift
00:16:52.442     10:13:47 sma.sma_vhost -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:16:52.442     10:13:47 sma.sma_vhost -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:16:52.442     10:13:47 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:16:52.442     10:13:47 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:16:52.442     10:13:47 sma.sma_vhost -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:16:52.442     10:13:47 sma.sma_vhost -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:16:52.442    10:13:47 sma.sma_vhost -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l'
00:16:52.442  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:16:52.701   10:13:47 sma.sma_vhost -- sma/vhost_blk.sh@148 -- # [[ 0 -eq 0 ]]
00:16:52.701   10:13:47 sma.sma_vhost -- sma/vhost_blk.sh@150 -- # devids=()
00:16:52.701    10:13:47 sma.sma_vhost -- sma/vhost_blk.sh@153 -- # rpc_cmd bdev_get_bdevs -b null0
00:16:52.701    10:13:47 sma.sma_vhost -- sma/vhost_blk.sh@153 -- # jq -r '.[].uuid'
00:16:52.701    10:13:47 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:52.701    10:13:47 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:52.701    10:13:47 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:52.701   10:13:47 sma.sma_vhost -- sma/vhost_blk.sh@153 -- # uuid=822ce54e-4577-464b-a58e-36213d6b54db
00:16:52.701    10:13:47 sma.sma_vhost -- sma/vhost_blk.sh@154 -- # create_device 0 822ce54e-4577-464b-a58e-36213d6b54db
00:16:52.701    10:13:47 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:52.701    10:13:47 sma.sma_vhost -- sma/vhost_blk.sh@154 -- # jq -r .handle
00:16:52.701     10:13:47 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 822ce54e-4577-464b-a58e-36213d6b54db
00:16:52.701     10:13:47 sma.sma_vhost -- sma/common.sh@20 -- # python
00:16:52.961  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:52.961  I0000 00:00:1732094027.904766 1828642 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:52.961  I0000 00:00:1732094027.906679 1828642 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:52.961  I0000 00:00:1732094027.908316 1828645 subchannel.cc:806] subchannel 0x56296d838180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56296d7451c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56296d7e9460, grpc.internal.client_channel_call_destination=0x7fea78c84390, grpc.internal.event_engine=0x56296d7ab440, grpc.internal.security_connector=0x56296d81fd00, grpc.internal.subchannel_pool=0x56296d81fc10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56296d4682f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:13:47.907781756+01:00"}), backing off for 1000 ms
00:16:52.961  VHOST_CONFIG: (/var/tmp/sma-0) vhost-user server: socket created, fd: 228
00:16:52.961  VHOST_CONFIG: (/var/tmp/sma-0) binding succeeded
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) new vhost user connection is 59
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) new device, handle is 0
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Vhost-user protocol features: 0x11ebf
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_QUEUE_NUM
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_BACKEND_REQ_FD
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_OWNER
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:232
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:233
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_CONFIG
00:16:53.897   10:13:48 sma.sma_vhost -- sma/vhost_blk.sh@154 -- # devids[0]=virtio_blk:sma-0
00:16:53.897    10:13:48 sma.sma_vhost -- sma/vhost_blk.sh@155 -- # rpc_cmd bdev_get_bdevs -b null1
00:16:53.897    10:13:48 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:53.897    10:13:48 sma.sma_vhost -- sma/vhost_blk.sh@155 -- # jq -r '.[].uuid'
00:16:53.897    10:13:48 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:53.897    10:13:48 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000008):
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_INFLIGHT_FD
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd num_queues: 2
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd queue_size: 128
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_size: 4224
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_offset: 0
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) send inflight fd: 58
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_INFLIGHT_FD
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_size: 4224
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_offset: 0
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd num_queues: 2
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd queue_size: 128
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd fd: 234
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd pervq_inflight_size: 2112
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:58
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:232
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_MEM_TABLE
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) guest memory region size: 0x40000000
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest physical addr: 0x0
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest virtual  addr: 0x7fc977e00000
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) 	 host  virtual  addr: 0x7f2ebb600000
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap addr : 0x7f2ebb600000
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap size : 0x40000000
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap align: 0x200000
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap off  : 0x0
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:16:53.897  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:16:53.898  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:16:53.898  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:16:53.898  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:16:53.898  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:0 file:235
00:16:53.898  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:16:53.898  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:16:53.898  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:16:53.898  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:16:53.898  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:16:53.898  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:1 file:236
00:16:53.898  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:16:53.898  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 0
00:16:53.898  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:16:53.898  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 1
00:16:53.898  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:16:53.898  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:16:53.898  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x0000000f):
00:16:53.898  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:16:53.898  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 1
00:16:53.898  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 1
00:16:53.898  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:16:53.898  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 1
00:16:53.898  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:16:53.898  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:16:53.898  VHOST_CONFIG: (/var/tmp/sma-0) virtio is now ready for processing.
00:16:53.898   10:13:48 sma.sma_vhost -- sma/vhost_blk.sh@155 -- # uuid=6ceebb8f-2b48-4b5b-a1ee-34d5e5259aa1
00:16:53.898    10:13:48 sma.sma_vhost -- sma/vhost_blk.sh@156 -- # create_device 32 6ceebb8f-2b48-4b5b-a1ee-34d5e5259aa1
00:16:53.898    10:13:48 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:53.898    10:13:48 sma.sma_vhost -- sma/vhost_blk.sh@156 -- # jq -r .handle
00:16:53.898     10:13:48 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 6ceebb8f-2b48-4b5b-a1ee-34d5e5259aa1
00:16:53.898     10:13:48 sma.sma_vhost -- sma/common.sh@20 -- # python
00:16:54.157  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:54.157  I0000 00:00:1732094029.105339 1828805 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:54.157  I0000 00:00:1732094029.107250 1828805 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:54.157  I0000 00:00:1732094029.108877 1828814 subchannel.cc:806] subchannel 0x5558650d1180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x555864fde1c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x555865082460, grpc.internal.client_channel_call_destination=0x7fe9d00fe390, grpc.internal.event_engine=0x555865044440, grpc.internal.security_connector=0x555864f3a650, grpc.internal.subchannel_pool=0x5558650b8c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x555864d012f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:13:49.108325571+01:00"}), backing off for 1000 ms
00:16:54.157  VHOST_CONFIG: (/var/tmp/sma-32) vhost-user server: socket created, fd: 239
00:16:54.157  VHOST_CONFIG: (/var/tmp/sma-32) binding succeeded
00:16:54.724  VHOST_CONFIG: (/var/tmp/sma-32) new vhost user connection is 237
00:16:54.724  VHOST_CONFIG: (/var/tmp/sma-32) new device, handle is 1
00:16:54.724  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_FEATURES
00:16:54.724  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:16:54.724  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:16:54.724  VHOST_CONFIG: (/var/tmp/sma-32) negotiated Vhost-user protocol features: 0x11ebf
00:16:54.724  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_QUEUE_NUM
00:16:54.724  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_BACKEND_REQ_FD
00:16:54.724  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_OWNER
00:16:54.724  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_FEATURES
00:16:54.724  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_CALL
00:16:54.724  VHOST_CONFIG: (/var/tmp/sma-32) vring call idx:0 file:241
00:16:54.724  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ERR
00:16:54.724  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_CALL
00:16:54.724  VHOST_CONFIG: (/var/tmp/sma-32) vring call idx:1 file:242
00:16:54.724  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ERR
00:16:54.724  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_CONFIG
00:16:54.983   10:13:49 sma.sma_vhost -- sma/vhost_blk.sh@156 -- # devids[1]=virtio_blk:sma-32
00:16:54.983    10:13:49 sma.sma_vhost -- sma/vhost_blk.sh@158 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l'
00:16:54.983    10:13:49 sma.sma_vhost -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:16:54.983    10:13:49 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:16:54.983    10:13:49 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:16:54.983    10:13:49 sma.sma_vhost -- vhost/common.sh@338 -- # local vm_num=0
00:16:54.983    10:13:49 sma.sma_vhost -- vhost/common.sh@339 -- # shift
00:16:54.983     10:13:49 sma.sma_vhost -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:16:54.983     10:13:49 sma.sma_vhost -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:16:54.983     10:13:49 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:16:54.983     10:13:49 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:16:54.983     10:13:49 sma.sma_vhost -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:16:54.983     10:13:49 sma.sma_vhost -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:16:54.983    10:13:49 sma.sma_vhost -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l'
00:16:54.983  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:16:54.983  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_FEATURES
00:16:54.983  VHOST_CONFIG: (/var/tmp/sma-32) negotiated Virtio features: 0x150005446
00:16:54.983  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_STATUS
00:16:54.983  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_STATUS
00:16:54.983  VHOST_CONFIG: (/var/tmp/sma-32) new device status(0x00000008):
00:16:54.983  VHOST_CONFIG: (/var/tmp/sma-32) 	-RESET: 0
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) 	-ACKNOWLEDGE: 0
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER: 0
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) 	-FEATURES_OK: 1
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER_OK: 0
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) 	-DEVICE_NEED_RESET: 0
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) 	-FAILED: 0
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_INFLIGHT_FD
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) get_inflight_fd num_queues: 2
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) get_inflight_fd queue_size: 128
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) send inflight mmap_size: 4224
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) send inflight mmap_offset: 0
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) send inflight fd: 238
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_INFLIGHT_FD
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd mmap_size: 4224
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd mmap_offset: 0
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd num_queues: 2
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd queue_size: 128
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd fd: 243
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd pervq_inflight_size: 2112
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_CALL
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) vring call idx:0 file:238
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_CALL
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) vring call idx:1 file:241
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_FEATURES
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) negotiated Virtio features: 0x150005446
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_STATUS
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_MEM_TABLE
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) guest memory region size: 0x40000000
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) 	 guest physical addr: 0x0
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) 	 guest virtual  addr: 0x7fc977e00000
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) 	 host  virtual  addr: 0x7f2e7b600000
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) 	 mmap addr : 0x7f2e7b600000
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) 	 mmap size : 0x40000000
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) 	 mmap align: 0x200000
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) 	 mmap off  : 0x0
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_NUM
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_BASE
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ADDR
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_KICK
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) vring kick idx:0 file:244
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_NUM
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_BASE
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ADDR
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_KICK
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) vring kick idx:1 file:245
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ENABLE
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) set queue enable: 1 to qp idx: 0
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ENABLE
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) set queue enable: 1 to qp idx: 1
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_STATUS
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_STATUS
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) new device status(0x0000000f):
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) 	-RESET: 0
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) 	-ACKNOWLEDGE: 1
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER: 1
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) 	-FEATURES_OK: 1
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER_OK: 1
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) 	-DEVICE_NEED_RESET: 0
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) 	-FAILED: 0
00:16:54.984  VHOST_CONFIG: (/var/tmp/sma-32) virtio is now ready for processing.
00:16:54.984   10:13:50 sma.sma_vhost -- sma/vhost_blk.sh@158 -- # [[ 2 -eq 2 ]]
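The guest-side check that just passed (`[[ 2 -eq 2 ]]`) counts virtio-blk devices visible inside the VM. A minimal standalone reproduction of that pipeline, with canned `lsblk` output standing in for a live guest, looks like this:

```shell
# Sample lsblk output: two virtio-blk disks (vda, vdb), as a guest with
# both sma-0 and sma-32 attached would report. The grep pattern matches
# device names starting with "vd", wc -l counts them.
lsblk_sample='vda 252:0  0 4M 0 disk
vdb 252:16 0 4M 0 disk'
echo "$lsblk_sample" | grep -E "^vd." | wc -l
```

The same pipeline runs again after both `delete_device` calls, where it is expected to print 0.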
00:16:54.984   10:13:50 sma.sma_vhost -- sma/vhost_blk.sh@161 -- # for id in "${devids[@]}"
00:16:54.984   10:13:50 sma.sma_vhost -- sma/vhost_blk.sh@162 -- # delete_device virtio_blk:sma-0
00:16:54.984   10:13:50 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:55.244  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:55.244  I0000 00:00:1732094030.309445 1828973 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:55.244  I0000 00:00:1732094030.311248 1828973 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:55.244  I0000 00:00:1732094030.312818 1828976 subchannel.cc:806] subchannel 0x5624ef56a180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5624ef4771c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5624ef51b460, grpc.internal.client_channel_call_destination=0x7f593b380390, grpc.internal.event_engine=0x5624ef4dd440, grpc.internal.security_connector=0x5624ef551d00, grpc.internal.subchannel_pool=0x5624ef551c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5624ef19a2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:13:50.312260511+01:00"}), backing off for 1000 ms
00:16:55.244  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:16:55.244  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000000):
00:16:55.244  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 1
00:16:55.244  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:16:55.244  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:16:55.244  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 0
00:16:55.244  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:16:55.244  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:16:55.244  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:16:55.244  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:16:55.244  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 0
00:16:55.244  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:16:55.244  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 1
00:16:55.244  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:16:55.504  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 file:50
00:16:55.504  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:16:55.504  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 file:0
00:16:55.762  VHOST_CONFIG: (/var/tmp/sma-0) vhost peer closed
00:16:55.762  {}
00:16:55.762   10:13:50 sma.sma_vhost -- sma/vhost_blk.sh@161 -- # for id in "${devids[@]}"
00:16:55.762   10:13:50 sma.sma_vhost -- sma/vhost_blk.sh@162 -- # delete_device virtio_blk:sma-32
00:16:55.762   10:13:50 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:56.020  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:56.020  I0000 00:00:1732094030.913753 1829017 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:56.020  I0000 00:00:1732094030.915554 1829017 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:56.020  I0000 00:00:1732094030.917011 1829115 subchannel.cc:806] subchannel 0x55f5c94ef180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55f5c93fc1c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55f5c94a0460, grpc.internal.client_channel_call_destination=0x7f4fc0d78390, grpc.internal.event_engine=0x55f5c9462440, grpc.internal.security_connector=0x55f5c94d6d00, grpc.internal.subchannel_pool=0x55f5c94d6c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55f5c911f2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:13:50.916532225+01:00"}), backing off for 999 ms
00:16:56.020  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_STATUS
00:16:56.020  VHOST_CONFIG: (/var/tmp/sma-32) new device status(0x00000000):
00:16:56.020  VHOST_CONFIG: (/var/tmp/sma-32) 	-RESET: 1
00:16:56.020  VHOST_CONFIG: (/var/tmp/sma-32) 	-ACKNOWLEDGE: 0
00:16:56.020  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER: 0
00:16:56.020  VHOST_CONFIG: (/var/tmp/sma-32) 	-FEATURES_OK: 0
00:16:56.020  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER_OK: 0
00:16:56.020  VHOST_CONFIG: (/var/tmp/sma-32) 	-DEVICE_NEED_RESET: 0
00:16:56.020  VHOST_CONFIG: (/var/tmp/sma-32) 	-FAILED: 0
00:16:56.020  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ENABLE
00:16:56.020  VHOST_CONFIG: (/var/tmp/sma-32) set queue enable: 0 to qp idx: 0
00:16:56.020  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ENABLE
00:16:56.020  VHOST_CONFIG: (/var/tmp/sma-32) set queue enable: 0 to qp idx: 1
00:16:56.020  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_VRING_BASE
00:16:56.020  VHOST_CONFIG: (/var/tmp/sma-32) vring base idx:0 file:0
00:16:56.020  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_VRING_BASE
00:16:56.020  VHOST_CONFIG: (/var/tmp/sma-32) vring base idx:1 file:50
00:16:56.020  VHOST_CONFIG: (/var/tmp/sma-32) vhost peer closed
00:16:56.020  {}
00:16:56.020    10:13:51 sma.sma_vhost -- sma/vhost_blk.sh@166 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l'
00:16:56.020    10:13:51 sma.sma_vhost -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:16:56.020    10:13:51 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:16:56.020    10:13:51 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:16:56.021    10:13:51 sma.sma_vhost -- vhost/common.sh@338 -- # local vm_num=0
00:16:56.021    10:13:51 sma.sma_vhost -- vhost/common.sh@339 -- # shift
00:16:56.021     10:13:51 sma.sma_vhost -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:16:56.021     10:13:51 sma.sma_vhost -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:16:56.021     10:13:51 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:16:56.021     10:13:51 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:16:56.021     10:13:51 sma.sma_vhost -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:16:56.021     10:13:51 sma.sma_vhost -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:16:56.021    10:13:51 sma.sma_vhost -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l'
00:16:56.021  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:16:56.987   10:13:52 sma.sma_vhost -- sma/vhost_blk.sh@166 -- # [[ 0 -eq 0 ]]
00:16:56.987   10:13:52 sma.sma_vhost -- sma/vhost_blk.sh@168 -- # key0=1234567890abcdef1234567890abcdef
00:16:56.987   10:13:52 sma.sma_vhost -- sma/vhost_blk.sh@169 -- # rpc_cmd bdev_malloc_create -b malloc0 32 4096
00:16:56.987   10:13:52 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:56.987   10:13:52 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:56.987  malloc0
00:16:56.987   10:13:52 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:56.987    10:13:52 sma.sma_vhost -- sma/vhost_blk.sh@170 -- # rpc_cmd bdev_get_bdevs -b malloc0
00:16:56.987    10:13:52 sma.sma_vhost -- sma/vhost_blk.sh@170 -- # jq -r '.[].uuid'
00:16:56.987    10:13:52 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:56.987    10:13:52 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:57.247    10:13:52 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:57.247   10:13:52 sma.sma_vhost -- sma/vhost_blk.sh@170 -- # uuid=bac9a159-cb7c-47d2-9091-c22ef068bee9
00:16:57.247    10:13:52 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:57.247    10:13:52 sma.sma_vhost -- sma/vhost_blk.sh@210 -- # jq -r .handle
00:16:57.247     10:13:52 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # uuid2base64 bac9a159-cb7c-47d2-9091-c22ef068bee9
00:16:57.247     10:13:52 sma.sma_vhost -- sma/common.sh@20 -- # python
00:16:57.247     10:13:52 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # get_cipher AES_CBC
00:16:57.247     10:13:52 sma.sma_vhost -- sma/common.sh@27 -- # case "$1" in
00:16:57.247     10:13:52 sma.sma_vhost -- sma/common.sh@28 -- # echo 0
00:16:57.247     10:13:52 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # format_key 1234567890abcdef1234567890abcdef
00:16:57.247     10:13:52 sma.sma_vhost -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/63
00:16:57.247      10:13:52 sma.sma_vhost -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
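The `format_key` trace above shows the helper feeding the raw hex key string through `base64 -w 0` via a process substitution. A minimal sketch of that behavior, assuming the helper does nothing beyond unwrapped base64 encoding of its argument:

```shell
# Hypothetical sketch of the format_key helper traced above: SMA's
# crypto parameters carry the key base64-encoded, so the 32-character
# key string is encoded without line wrapping (-w 0) or a newline.
format_key() {
  echo -n "$1" | base64 -w 0
}
format_key 1234567890abcdef1234567890abcdef
```

Note `-w 0` is the GNU coreutils flag seen in the trace; BSD base64 would need a different invocation.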
00:16:57.507  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:57.507  I0000 00:00:1732094032.436530 1829286 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:57.507  I0000 00:00:1732094032.438304 1829286 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:57.507  I0000 00:00:1732094032.439891 1829299 subchannel.cc:806] subchannel 0x5582ba42f180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5582ba33c1c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5582ba3e0460, grpc.internal.client_channel_call_destination=0x7fc21739e390, grpc.internal.event_engine=0x5582ba3a2440, grpc.internal.security_connector=0x5582ba416d00, grpc.internal.subchannel_pool=0x5582ba416c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5582ba05f2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:13:52.439340843+01:00"}), backing off for 1000 ms
00:16:57.507  VHOST_CONFIG: (/var/tmp/sma-0) vhost-user server: socket created, fd: 248
00:16:57.507  VHOST_CONFIG: (/var/tmp/sma-0) binding succeeded
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) new vhost user connection is 60
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) new device, handle is 0
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Vhost-user protocol features: 0x11ebf
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_QUEUE_NUM
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_BACKEND_REQ_FD
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_OWNER
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:250
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:251
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_CONFIG
00:16:58.078   10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # devid0=virtio_blk:sma-0
00:16:58.078    10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@194 -- # rpc_cmd vhost_get_controllers
00:16:58.078    10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@194 -- # jq -r '. | length'
00:16:58.078    10:13:53 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:58.078    10:13:53 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:58.078    10:13:53 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150007646
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000008):
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_INFLIGHT_FD
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd num_queues: 2
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd queue_size: 128
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_size: 4224
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_offset: 0
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) send inflight fd: 59
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_INFLIGHT_FD
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_size: 4224
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_offset: 0
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd num_queues: 2
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd queue_size: 128
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd fd: 252
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd pervq_inflight_size: 2112
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:59
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:250
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150007646
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_MEM_TABLE
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) guest memory region size: 0x40000000
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest physical addr: 0x0
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest virtual  addr: 0x7fc977e00000
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) 	 host  virtual  addr: 0x7f2ebb600000
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap addr : 0x7f2ebb600000
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap size : 0x40000000
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap align: 0x200000
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap off  : 0x0
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:0 file:253
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:16:58.078   10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@194 -- # [[ 1 -eq 1 ]]
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:1 file:254
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 0
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 1
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x0000000f):
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 1
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 1
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 1
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:16:58.078  VHOST_CONFIG: (/var/tmp/sma-0) virtio is now ready for processing.
00:16:58.078    10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@195 -- # rpc_cmd vhost_get_controllers
00:16:58.078    10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@195 -- # jq -r '.[].backend_specific.block.bdev'
00:16:58.078    10:13:53 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:58.078    10:13:53 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:58.078    10:13:53 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.078   10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@195 -- # bdev=1f1989ca-3936-4c26-afed-cf3b37b0e359
00:16:58.078    10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@197 -- # rpc_cmd bdev_get_bdevs
00:16:58.078    10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@197 -- # jq -r '.[] | select(.product_name == "crypto")'
00:16:58.078    10:13:53 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:58.078    10:13:53 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:58.078    10:13:53 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.078   10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@197 -- # crypto_bdev='{
00:16:58.078    "name": "1f1989ca-3936-4c26-afed-cf3b37b0e359",
00:16:58.078    "aliases": [
00:16:58.078      "5d65fa7e-1040-55d9-af8e-7636e6c252e8"
00:16:58.078    ],
00:16:58.078    "product_name": "crypto",
00:16:58.078    "block_size": 4096,
00:16:58.078    "num_blocks": 8192,
00:16:58.078    "uuid": "5d65fa7e-1040-55d9-af8e-7636e6c252e8",
00:16:58.078    "assigned_rate_limits": {
00:16:58.078      "rw_ios_per_sec": 0,
00:16:58.078      "rw_mbytes_per_sec": 0,
00:16:58.078      "r_mbytes_per_sec": 0,
00:16:58.078      "w_mbytes_per_sec": 0
00:16:58.078    },
00:16:58.078    "claimed": false,
00:16:58.078    "zoned": false,
00:16:58.078    "supported_io_types": {
00:16:58.078      "read": true,
00:16:58.078      "write": true,
00:16:58.078      "unmap": true,
00:16:58.078      "flush": true,
00:16:58.078      "reset": true,
00:16:58.078      "nvme_admin": false,
00:16:58.078      "nvme_io": false,
00:16:58.078      "nvme_io_md": false,
00:16:58.078      "write_zeroes": true,
00:16:58.078      "zcopy": false,
00:16:58.078      "get_zone_info": false,
00:16:58.078      "zone_management": false,
00:16:58.078      "zone_append": false,
00:16:58.078      "compare": false,
00:16:58.078      "compare_and_write": false,
00:16:58.078      "abort": false,
00:16:58.078      "seek_hole": false,
00:16:58.078      "seek_data": false,
00:16:58.078      "copy": false,
00:16:58.078      "nvme_iov_md": false
00:16:58.078    },
00:16:58.078    "memory_domains": [
00:16:58.078      {
00:16:58.078        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:58.078        "dma_device_type": 2
00:16:58.078      }
00:16:58.078    ],
00:16:58.078    "driver_specific": {
00:16:58.078      "crypto": {
00:16:58.078        "base_bdev_name": "malloc0",
00:16:58.078        "name": "1f1989ca-3936-4c26-afed-cf3b37b0e359",
00:16:58.078        "key_name": "1f1989ca-3936-4c26-afed-cf3b37b0e359_AES_CBC"
00:16:58.078      }
00:16:58.078    }
00:16:58.079  }'
00:16:58.079    10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@198 -- # jq -r .driver_specific.crypto.name
00:16:58.337   10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@198 -- # [[ 1f1989ca-3936-4c26-afed-cf3b37b0e359 == \1\f\1\9\8\9\c\a\-\3\9\3\6\-\4\c\2\6\-\a\f\e\d\-\c\f\3\b\3\7\b\0\e\3\5\9 ]]
00:16:58.337    10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@199 -- # jq -r .driver_specific.crypto.key_name
00:16:58.337   10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@199 -- # key_name=1f1989ca-3936-4c26-afed-cf3b37b0e359_AES_CBC
00:16:58.337    10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@200 -- # rpc_cmd accel_crypto_keys_get -k 1f1989ca-3936-4c26-afed-cf3b37b0e359_AES_CBC
00:16:58.337    10:13:53 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:58.337    10:13:53 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:58.337    10:13:53 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.337   10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@200 -- # key_obj='[
00:16:58.337  {
00:16:58.337  "name": "1f1989ca-3936-4c26-afed-cf3b37b0e359_AES_CBC",
00:16:58.337  "cipher": "AES_CBC",
00:16:58.337  "key": "1234567890abcdef1234567890abcdef"
00:16:58.337  }
00:16:58.337  ]'
00:16:58.337    10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@201 -- # jq -r '.[0].key'
00:16:58.337   10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@201 -- # [[ 1234567890abcdef1234567890abcdef == \1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f\1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f ]]
00:16:58.337    10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@202 -- # jq -r '.[0].cipher'
00:16:58.337   10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@202 -- # [[ AES_CBC == \A\E\S\_\C\B\C ]]
00:16:58.337   10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@205 -- # delete_device virtio_blk:sma-0
00:16:58.337   10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:58.596  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:58.596  I0000 00:00:1732094033.575432 1829470 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:58.596  I0000 00:00:1732094033.577289 1829470 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:58.596  I0000 00:00:1732094033.578895 1829472 subchannel.cc:806] subchannel 0x55672a3fa180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55672a3071c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55672a3ab460, grpc.internal.client_channel_call_destination=0x7f15e083b390, grpc.internal.event_engine=0x55672a36d440, grpc.internal.security_connector=0x55672a3e1d00, grpc.internal.subchannel_pool=0x55672a3e1c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55672a02a2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:13:53.578344831+01:00"}), backing off for 1000 ms
00:16:58.596  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:16:58.596  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000000):
00:16:58.596  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 1
00:16:58.596  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:16:58.596  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:16:58.596  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 0
00:16:58.596  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:16:58.596  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:16:58.596  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:16:58.596  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:16:58.596  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 0
00:16:58.596  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:16:58.596  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 1
00:16:58.596  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:16:58.596  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 file:36
00:16:58.596  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:16:58.596  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 file:0
00:16:58.596  VHOST_CONFIG: (/var/tmp/sma-0) vhost peer closed
00:16:58.596  {}
00:16:58.855    10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@206 -- # rpc_cmd bdev_get_bdevs
00:16:58.855    10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@206 -- # jq -r '.[] | select(.product_name == "crypto")'
00:16:58.855    10:13:53 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:58.855    10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@206 -- # jq -r length
00:16:58.855    10:13:53 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:58.855    10:13:53 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.855   10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@206 -- # [[ '' -eq 0 ]]
00:16:58.855   10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@209 -- # device_vhost=2
00:16:58.855    10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@210 -- # rpc_cmd bdev_get_bdevs -b null0
00:16:58.855    10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@210 -- # jq -r '.[].uuid'
00:16:58.855    10:13:53 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:58.855    10:13:53 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:58.855    10:13:53 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.855   10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@210 -- # uuid=822ce54e-4577-464b-a58e-36213d6b54db
00:16:58.855    10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@211 -- # create_device 0 822ce54e-4577-464b-a58e-36213d6b54db
00:16:58.855    10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:58.855    10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@211 -- # jq -r .handle
00:16:58.855     10:13:53 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 822ce54e-4577-464b-a58e-36213d6b54db
00:16:58.855     10:13:53 sma.sma_vhost -- sma/common.sh@20 -- # python
00:16:59.113  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:59.113  I0000 00:00:1732094034.103181 1829502 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:59.113  I0000 00:00:1732094034.105038 1829502 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:59.113  I0000 00:00:1732094034.106755 1829523 subchannel.cc:806] subchannel 0x55b5e5bb3180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55b5e5ac01c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55b5e5b64460, grpc.internal.client_channel_call_destination=0x7feb2465e390, grpc.internal.event_engine=0x55b5e5b26440, grpc.internal.security_connector=0x55b5e5b9ad00, grpc.internal.subchannel_pool=0x55b5e5b9ac10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55b5e57e32f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:13:54.106259862+01:00"}), backing off for 1000 ms
00:16:59.113  VHOST_CONFIG: (/var/tmp/sma-0) vhost-user server: socket created, fd: 248
00:16:59.113  VHOST_CONFIG: (/var/tmp/sma-0) binding succeeded
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) new vhost user connection is 59
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) new device, handle is 0
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Vhost-user protocol features: 0x11ebf
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_QUEUE_NUM
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_BACKEND_REQ_FD
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_OWNER
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:250
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:251
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_CONFIG
00:16:59.680   10:13:54 sma.sma_vhost -- sma/vhost_blk.sh@211 -- # device=virtio_blk:sma-0
00:16:59.680   10:13:54 sma.sma_vhost -- sma/vhost_blk.sh@214 -- # diff /dev/fd/62 /dev/fd/61
00:16:59.680    10:13:54 sma.sma_vhost -- sma/vhost_blk.sh@214 -- # jq --sort-keys
00:16:59.680    10:13:54 sma.sma_vhost -- sma/vhost_blk.sh@214 -- # get_qos_caps 2
00:16:59.680    10:13:54 sma.sma_vhost -- sma/common.sh@45 -- # local rootdir
00:16:59.680    10:13:54 sma.sma_vhost -- sma/vhost_blk.sh@214 -- # jq --sort-keys
00:16:59.680     10:13:54 sma.sma_vhost -- sma/common.sh@47 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:16:59.680    10:13:54 sma.sma_vhost -- sma/common.sh@47 -- # rootdir=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../..
00:16:59.680    10:13:54 sma.sma_vhost -- sma/common.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000008):
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_INFLIGHT_FD
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd num_queues: 2
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd queue_size: 128
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_size: 4224
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_offset: 0
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) send inflight fd: 60
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_INFLIGHT_FD
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_size: 4224
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_offset: 0
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd num_queues: 2
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd queue_size: 128
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd fd: 252
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd pervq_inflight_size: 2112
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:60
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:250
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_MEM_TABLE
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) guest memory region size: 0x40000000
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest physical addr: 0x0
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest virtual  addr: 0x7fc977e00000
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) 	 host  virtual  addr: 0x7f2e7b400000
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap addr : 0x7f2e7b400000
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap size : 0x40000000
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap align: 0x200000
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap off  : 0x0
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:0 file:253
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:1 file:254
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 0
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 1
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x0000000f):
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 1
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 1
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 1
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:16:59.680  VHOST_CONFIG: (/var/tmp/sma-0) virtio is now ready for processing.
00:16:59.939  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:59.939  I0000 00:00:1732094034.954612 1829667 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:59.939  I0000 00:00:1732094034.956552 1829667 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:59.939  I0000 00:00:1732094034.958020 1829668 subchannel.cc:806] subchannel 0x564062d4d650 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x564062bf8520, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x564062b44060, grpc.internal.client_channel_call_destination=0x7fd932c4b390, grpc.internal.event_engine=0x564062c11e50, grpc.internal.security_connector=0x564062afacb0, grpc.internal.subchannel_pool=0x564062c29d10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5640629f1200, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:13:54.957520089+01:00"}), backing off for 999 ms
00:16:59.939   10:13:54 sma.sma_vhost -- sma/vhost_blk.sh@233 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:59.939    10:13:54 sma.sma_vhost -- sma/vhost_blk.sh@233 -- # uuid2base64 822ce54e-4577-464b-a58e-36213d6b54db
00:16:59.939    10:13:54 sma.sma_vhost -- sma/common.sh@20 -- # python
00:17:00.198  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:00.198  I0000 00:00:1732094035.266004 1829688 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:00.198  I0000 00:00:1732094035.267769 1829688 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:00.198  I0000 00:00:1732094035.269461 1829772 subchannel.cc:806] subchannel 0x5583a4b01180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5583a4a0e1c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5583a4ab2460, grpc.internal.client_channel_call_destination=0x7f2b05f1e390, grpc.internal.event_engine=0x5583a4a74440, grpc.internal.security_connector=0x5583a496a650, grpc.internal.subchannel_pool=0x5583a4ae8c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5583a47312f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:13:55.268941285+01:00"}), backing off for 1000 ms
00:17:00.198  {}
00:17:00.456   10:13:55 sma.sma_vhost -- sma/vhost_blk.sh@252 -- # diff /dev/fd/62 /dev/fd/61
00:17:00.456    10:13:55 sma.sma_vhost -- sma/vhost_blk.sh@252 -- # jq --sort-keys
00:17:00.456    10:13:55 sma.sma_vhost -- sma/vhost_blk.sh@252 -- # rpc_cmd bdev_get_bdevs -b 822ce54e-4577-464b-a58e-36213d6b54db
00:17:00.456    10:13:55 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:00.456    10:13:55 sma.sma_vhost -- sma/vhost_blk.sh@252 -- # jq --sort-keys '.[].assigned_rate_limits'
00:17:00.456    10:13:55 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:00.456    10:13:55 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:00.456   10:13:55 sma.sma_vhost -- sma/vhost_blk.sh@264 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:00.714  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:00.714  I0000 00:00:1732094035.614656 1829843 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:00.714  I0000 00:00:1732094035.616464 1829843 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:00.714  I0000 00:00:1732094035.618076 1829849 subchannel.cc:806] subchannel 0x556c7901c180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x556c78f291c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x556c78fcd460, grpc.internal.client_channel_call_destination=0x7effef652390, grpc.internal.event_engine=0x556c78f8f440, grpc.internal.security_connector=0x556c78e78da0, grpc.internal.subchannel_pool=0x556c79003c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x556c78c4c2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:13:55.617606795+01:00"}), backing off for 999 ms
00:17:00.714  {}
00:17:00.714   10:13:55 sma.sma_vhost -- sma/vhost_blk.sh@283 -- # diff /dev/fd/62 /dev/fd/61
00:17:00.714    10:13:55 sma.sma_vhost -- sma/vhost_blk.sh@283 -- # rpc_cmd bdev_get_bdevs -b 822ce54e-4577-464b-a58e-36213d6b54db
00:17:00.714    10:13:55 sma.sma_vhost -- sma/vhost_blk.sh@283 -- # jq --sort-keys
00:17:00.714    10:13:55 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:00.714    10:13:55 sma.sma_vhost -- sma/vhost_blk.sh@283 -- # jq --sort-keys '.[].assigned_rate_limits'
00:17:00.714    10:13:55 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:00.714    10:13:55 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:00.714   10:13:55 sma.sma_vhost -- sma/vhost_blk.sh@295 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:00.714     10:13:55 sma.sma_vhost -- sma/vhost_blk.sh@295 -- # uuidgen
00:17:00.714    10:13:55 sma.sma_vhost -- sma/vhost_blk.sh@295 -- # uuid2base64 36e6cd6b-8350-4d6d-bf1c-43f738ecf1c8
00:17:00.714    10:13:55 sma.sma_vhost -- sma/common.sh@20 -- # python
00:17:00.714   10:13:55 sma.sma_vhost -- common/autotest_common.sh@652 -- # local es=0
00:17:00.714   10:13:55 sma.sma_vhost -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:00.714   10:13:55 sma.sma_vhost -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:00.714   10:13:55 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:00.714    10:13:55 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:00.714   10:13:55 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:00.714    10:13:55 sma.sma_vhost -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:00.714   10:13:55 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:00.714   10:13:55 sma.sma_vhost -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:00.714   10:13:55 sma.sma_vhost -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:17:00.714   10:13:55 sma.sma_vhost -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:00.973  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:00.973  I0000 00:00:1732094035.986895 1829885 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:00.973  I0000 00:00:1732094035.988781 1829885 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:00.973  I0000 00:00:1732094035.990411 1829886 subchannel.cc:806] subchannel 0x561506ebc180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x561506dc91c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x561506e6d460, grpc.internal.client_channel_call_destination=0x7fa69f001390, grpc.internal.event_engine=0x561506e2f440, grpc.internal.security_connector=0x561506d25650, grpc.internal.subchannel_pool=0x561506ea3c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x561506aec2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:13:55.989906107+01:00"}), backing off for 1000 ms
00:17:00.973  [2024-11-20 10:13:56.026452] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 36e6cd6b-8350-4d6d-bf1c-43f738ecf1c8
00:17:00.973  Traceback (most recent call last):
00:17:00.973    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:00.973      main(sys.argv[1:])
00:17:00.973    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:00.973      result = client.call(request['method'], request.get('params', {}))
00:17:00.973               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:00.973    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:00.973      response = func(request=json_format.ParseDict(params, input()))
00:17:00.973                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:00.973    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:00.973      return _end_unary_response_blocking(state, call, False, None)
00:17:00.973             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:00.973    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:00.973      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:00.973      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:00.973  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:00.973  	status = StatusCode.INVALID_ARGUMENT
00:17:00.973  	details = "Specified volume is not attached to the device"
00:17:00.973  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Specified volume is not attached to the device", grpc_status:3, created_time:"2024-11-20T10:13:56.030835667+01:00"}"
00:17:00.973  >
00:17:00.973   10:13:56 sma.sma_vhost -- common/autotest_common.sh@655 -- # es=1
00:17:00.973   10:13:56 sma.sma_vhost -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:00.973   10:13:56 sma.sma_vhost -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:00.973   10:13:56 sma.sma_vhost -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:00.973   10:13:56 sma.sma_vhost -- sma/vhost_blk.sh@314 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:00.973    10:13:56 sma.sma_vhost -- sma/vhost_blk.sh@314 -- # base64
00:17:00.973   10:13:56 sma.sma_vhost -- common/autotest_common.sh@652 -- # local es=0
00:17:00.973   10:13:56 sma.sma_vhost -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:00.973   10:13:56 sma.sma_vhost -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:00.973   10:13:56 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:00.973    10:13:56 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:00.973   10:13:56 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:00.973    10:13:56 sma.sma_vhost -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:00.973   10:13:56 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:00.973   10:13:56 sma.sma_vhost -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:00.973   10:13:56 sma.sma_vhost -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:17:00.973   10:13:56 sma.sma_vhost -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:01.231  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:01.231  I0000 00:00:1732094036.289324 1829910 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:01.231  I0000 00:00:1732094036.291065 1829910 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:01.231  I0000 00:00:1732094036.292697 1829918 subchannel.cc:806] subchannel 0x5598e6ade180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5598e69eb1c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5598e6a8f460, grpc.internal.client_channel_call_destination=0x7f9e37c5a390, grpc.internal.event_engine=0x5598e6a51440, grpc.internal.security_connector=0x5598e693ada0, grpc.internal.subchannel_pool=0x5598e6ac5c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5598e670e2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:13:56.292192597+01:00"}), backing off for 1000 ms
00:17:01.231  Traceback (most recent call last):
00:17:01.231    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:01.231      main(sys.argv[1:])
00:17:01.231    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:01.231      result = client.call(request['method'], request.get('params', {}))
00:17:01.231               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:01.231    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:01.231      response = func(request=json_format.ParseDict(params, input()))
00:17:01.231                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:01.231    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:01.231      return _end_unary_response_blocking(state, call, False, None)
00:17:01.231             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:01.231    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:01.231      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:01.231      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:01.231  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:01.231  	status = StatusCode.INVALID_ARGUMENT
00:17:01.231  	details = "Invalid volume uuid"
00:17:01.231  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid volume uuid", grpc_status:3, created_time:"2024-11-20T10:13:56.299675584+01:00"}"
00:17:01.231  >
00:17:01.231   10:13:56 sma.sma_vhost -- common/autotest_common.sh@655 -- # es=1
00:17:01.231   10:13:56 sma.sma_vhost -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:01.231   10:13:56 sma.sma_vhost -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:01.231   10:13:56 sma.sma_vhost -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:01.231   10:13:56 sma.sma_vhost -- sma/vhost_blk.sh@333 -- # diff /dev/fd/62 /dev/fd/61
00:17:01.231    10:13:56 sma.sma_vhost -- sma/vhost_blk.sh@333 -- # jq --sort-keys
00:17:01.231    10:13:56 sma.sma_vhost -- sma/vhost_blk.sh@333 -- # rpc_cmd bdev_get_bdevs -b 822ce54e-4577-464b-a58e-36213d6b54db
00:17:01.231    10:13:56 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:01.231    10:13:56 sma.sma_vhost -- sma/vhost_blk.sh@333 -- # jq --sort-keys '.[].assigned_rate_limits'
00:17:01.231    10:13:56 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:01.231    10:13:56 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:01.488   10:13:56 sma.sma_vhost -- sma/vhost_blk.sh@344 -- # delete_device virtio_blk:sma-0
00:17:01.488   10:13:56 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:01.488  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:01.488  I0000 00:00:1732094036.597837 1829989 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:01.488  I0000 00:00:1732094036.599553 1829989 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:01.489  I0000 00:00:1732094036.601036 1830063 subchannel.cc:806] subchannel 0x55882fcea180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55882fbf71c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55882fc9b460, grpc.internal.client_channel_call_destination=0x7f75423f7390, grpc.internal.event_engine=0x55882fc5d440, grpc.internal.security_connector=0x55882fcd1d00, grpc.internal.subchannel_pool=0x55882fcd1c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55882f91a2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:13:56.600556102+01:00"}), backing off for 999 ms
00:17:01.747  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:17:01.747  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000000):
00:17:01.747  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 1
00:17:01.747  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:17:01.747  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:17:01.747  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 0
00:17:01.747  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:17:01.747  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:17:01.747  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:17:01.747  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:01.747  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 0
00:17:01.747  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:01.747  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 1
00:17:01.747  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:17:01.747  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 file:0
00:17:01.747  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:17:01.747  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 file:50
00:17:01.747  VHOST_CONFIG: (/var/tmp/sma-0) vhost peer closed
00:17:01.747  {}
00:17:01.747   10:13:56 sma.sma_vhost -- sma/vhost_blk.sh@346 -- # cleanup
00:17:01.747   10:13:56 sma.sma_vhost -- sma/vhost_blk.sh@14 -- # killprocess 1827183
00:17:01.747   10:13:56 sma.sma_vhost -- common/autotest_common.sh@954 -- # '[' -z 1827183 ']'
00:17:01.747   10:13:56 sma.sma_vhost -- common/autotest_common.sh@958 -- # kill -0 1827183
00:17:01.747    10:13:56 sma.sma_vhost -- common/autotest_common.sh@959 -- # uname
00:17:01.747   10:13:56 sma.sma_vhost -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:01.747    10:13:56 sma.sma_vhost -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1827183
00:17:02.007   10:13:56 sma.sma_vhost -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:02.007   10:13:56 sma.sma_vhost -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:02.007   10:13:56 sma.sma_vhost -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1827183'
00:17:02.007  killing process with pid 1827183
00:17:02.007   10:13:56 sma.sma_vhost -- common/autotest_common.sh@973 -- # kill 1827183
00:17:02.007   10:13:56 sma.sma_vhost -- common/autotest_common.sh@978 -- # wait 1827183
00:17:02.945   10:13:57 sma.sma_vhost -- sma/vhost_blk.sh@15 -- # killprocess 1827415
00:17:02.945   10:13:57 sma.sma_vhost -- common/autotest_common.sh@954 -- # '[' -z 1827415 ']'
00:17:02.945   10:13:57 sma.sma_vhost -- common/autotest_common.sh@958 -- # kill -0 1827415
00:17:02.945    10:13:57 sma.sma_vhost -- common/autotest_common.sh@959 -- # uname
00:17:02.945   10:13:57 sma.sma_vhost -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:02.945    10:13:57 sma.sma_vhost -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1827415
00:17:02.945   10:13:57 sma.sma_vhost -- common/autotest_common.sh@960 -- # process_name=python3
00:17:02.945   10:13:57 sma.sma_vhost -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:17:02.945   10:13:57 sma.sma_vhost -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1827415'
00:17:02.945  killing process with pid 1827415
00:17:02.945   10:13:57 sma.sma_vhost -- common/autotest_common.sh@973 -- # kill 1827415
00:17:02.945   10:13:57 sma.sma_vhost -- common/autotest_common.sh@978 -- # wait 1827415
00:17:02.945   10:13:57 sma.sma_vhost -- sma/vhost_blk.sh@16 -- # vm_kill_all
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@476 -- # local vm
00:17:02.945    10:13:57 sma.sma_vhost -- vhost/common.sh@477 -- # vm_list_all
00:17:02.945    10:13:57 sma.sma_vhost -- vhost/common.sh@466 -- # vms=()
00:17:02.945    10:13:57 sma.sma_vhost -- vhost/common.sh@466 -- # local vms
00:17:02.945    10:13:57 sma.sma_vhost -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:17:02.945    10:13:57 sma.sma_vhost -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:17:02.945    10:13:57 sma.sma_vhost -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/0
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@478 -- # vm_kill 0
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@442 -- # vm_num_is_valid 0
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/0
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@449 -- # local vm_pid
00:17:02.945    10:13:57 sma.sma_vhost -- vhost/common.sh@450 -- # cat /root/vhost_test/vms/0/qemu.pid
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@450 -- # vm_pid=1824409
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@452 -- # notice 'Killing virtual machine /root/vhost_test/vms/0 (pid=1824409)'
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'Killing virtual machine /root/vhost_test/vms/0 (pid=1824409)'
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: Killing virtual machine /root/vhost_test/vms/0 (pid=1824409)'
00:17:02.945  INFO: Killing virtual machine /root/vhost_test/vms/0 (pid=1824409)
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@454 -- # /bin/kill 1824409
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@455 -- # notice 'process 1824409 killed'
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'process 1824409 killed'
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: process 1824409 killed'
00:17:02.945  INFO: process 1824409 killed
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@456 -- # rm -rf /root/vhost_test/vms/0
00:17:02.945   10:13:57 sma.sma_vhost -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:17:02.945   10:13:57 sma.sma_vhost -- sma/vhost_blk.sh@347 -- # trap - SIGINT SIGTERM EXIT
00:17:02.945  
00:17:02.945  real	0m43.718s
00:17:02.945  user	0m45.638s
00:17:02.945  sys	0m2.812s
00:17:02.945   10:13:57 sma.sma_vhost -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:02.945   10:13:57 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:02.945  ************************************
00:17:02.945  END TEST sma_vhost
00:17:02.945  ************************************
00:17:02.945   10:13:58 sma -- sma/sma.sh@16 -- # run_test sma_crypto /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/crypto.sh
00:17:02.945   10:13:58 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:02.945   10:13:58 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:02.945   10:13:58 sma -- common/autotest_common.sh@10 -- # set +x
00:17:02.945  ************************************
00:17:02.945  START TEST sma_crypto
00:17:02.945  ************************************
00:17:02.945   10:13:58 sma.sma_crypto -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/crypto.sh
00:17:03.204  * Looking for test storage...
00:17:03.204  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:17:03.204    10:13:58 sma.sma_crypto -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:17:03.204     10:13:58 sma.sma_crypto -- common/autotest_common.sh@1693 -- # lcov --version
00:17:03.204     10:13:58 sma.sma_crypto -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:17:03.204    10:13:58 sma.sma_crypto -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:17:03.204    10:13:58 sma.sma_crypto -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:03.204    10:13:58 sma.sma_crypto -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:03.204    10:13:58 sma.sma_crypto -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:03.204    10:13:58 sma.sma_crypto -- scripts/common.sh@336 -- # IFS=.-:
00:17:03.204    10:13:58 sma.sma_crypto -- scripts/common.sh@336 -- # read -ra ver1
00:17:03.204    10:13:58 sma.sma_crypto -- scripts/common.sh@337 -- # IFS=.-:
00:17:03.204    10:13:58 sma.sma_crypto -- scripts/common.sh@337 -- # read -ra ver2
00:17:03.204    10:13:58 sma.sma_crypto -- scripts/common.sh@338 -- # local 'op=<'
00:17:03.204    10:13:58 sma.sma_crypto -- scripts/common.sh@340 -- # ver1_l=2
00:17:03.204    10:13:58 sma.sma_crypto -- scripts/common.sh@341 -- # ver2_l=1
00:17:03.204    10:13:58 sma.sma_crypto -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:03.204    10:13:58 sma.sma_crypto -- scripts/common.sh@344 -- # case "$op" in
00:17:03.204    10:13:58 sma.sma_crypto -- scripts/common.sh@345 -- # : 1
00:17:03.204    10:13:58 sma.sma_crypto -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:03.204    10:13:58 sma.sma_crypto -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:03.204     10:13:58 sma.sma_crypto -- scripts/common.sh@365 -- # decimal 1
00:17:03.204     10:13:58 sma.sma_crypto -- scripts/common.sh@353 -- # local d=1
00:17:03.204     10:13:58 sma.sma_crypto -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:03.204     10:13:58 sma.sma_crypto -- scripts/common.sh@355 -- # echo 1
00:17:03.204    10:13:58 sma.sma_crypto -- scripts/common.sh@365 -- # ver1[v]=1
00:17:03.204     10:13:58 sma.sma_crypto -- scripts/common.sh@366 -- # decimal 2
00:17:03.204     10:13:58 sma.sma_crypto -- scripts/common.sh@353 -- # local d=2
00:17:03.204     10:13:58 sma.sma_crypto -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:03.204     10:13:58 sma.sma_crypto -- scripts/common.sh@355 -- # echo 2
00:17:03.204    10:13:58 sma.sma_crypto -- scripts/common.sh@366 -- # ver2[v]=2
00:17:03.204    10:13:58 sma.sma_crypto -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:03.204    10:13:58 sma.sma_crypto -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:03.204    10:13:58 sma.sma_crypto -- scripts/common.sh@368 -- # return 0
00:17:03.204    10:13:58 sma.sma_crypto -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:03.204    10:13:58 sma.sma_crypto -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:17:03.204  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:03.204  		--rc genhtml_branch_coverage=1
00:17:03.204  		--rc genhtml_function_coverage=1
00:17:03.204  		--rc genhtml_legend=1
00:17:03.204  		--rc geninfo_all_blocks=1
00:17:03.204  		--rc geninfo_unexecuted_blocks=1
00:17:03.204  		
00:17:03.204  		'
00:17:03.204    10:13:58 sma.sma_crypto -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:17:03.204  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:03.204  		--rc genhtml_branch_coverage=1
00:17:03.204  		--rc genhtml_function_coverage=1
00:17:03.204  		--rc genhtml_legend=1
00:17:03.204  		--rc geninfo_all_blocks=1
00:17:03.204  		--rc geninfo_unexecuted_blocks=1
00:17:03.204  		
00:17:03.204  		'
00:17:03.204    10:13:58 sma.sma_crypto -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:17:03.204  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:03.204  		--rc genhtml_branch_coverage=1
00:17:03.204  		--rc genhtml_function_coverage=1
00:17:03.204  		--rc genhtml_legend=1
00:17:03.204  		--rc geninfo_all_blocks=1
00:17:03.204  		--rc geninfo_unexecuted_blocks=1
00:17:03.204  		
00:17:03.204  		'
00:17:03.204    10:13:58 sma.sma_crypto -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:17:03.204  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:03.204  		--rc genhtml_branch_coverage=1
00:17:03.204  		--rc genhtml_function_coverage=1
00:17:03.205  		--rc genhtml_legend=1
00:17:03.205  		--rc geninfo_all_blocks=1
00:17:03.205  		--rc geninfo_unexecuted_blocks=1
00:17:03.205  		
00:17:03.205  		'
00:17:03.205   10:13:58 sma.sma_crypto -- sma/crypto.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:17:03.205   10:13:58 sma.sma_crypto -- sma/crypto.sh@13 -- # rpc_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:17:03.205   10:13:58 sma.sma_crypto -- sma/crypto.sh@14 -- # localnqn=nqn.2016-06.io.spdk:cnode0
00:17:03.205   10:13:58 sma.sma_crypto -- sma/crypto.sh@15 -- # tgtnqn=nqn.2016-06.io.spdk:tgt0
00:17:03.205   10:13:58 sma.sma_crypto -- sma/crypto.sh@16 -- # key0=1234567890abcdef1234567890abcdef
00:17:03.205   10:13:58 sma.sma_crypto -- sma/crypto.sh@17 -- # key1=deadbeefcafebabefeedbeefbabecafe
00:17:03.205   10:13:58 sma.sma_crypto -- sma/crypto.sh@18 -- # tgtsock=/var/tmp/spdk.sock2
00:17:03.205   10:13:58 sma.sma_crypto -- sma/crypto.sh@19 -- # discovery_port=8009
00:17:03.205   10:13:58 sma.sma_crypto -- sma/crypto.sh@145 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:17:03.205   10:13:58 sma.sma_crypto -- sma/crypto.sh@148 -- # hostpid=1830300
00:17:03.205   10:13:58 sma.sma_crypto -- sma/crypto.sh@147 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --wait-for-rpc
00:17:03.205   10:13:58 sma.sma_crypto -- sma/crypto.sh@150 -- # waitforlisten 1830300
00:17:03.205   10:13:58 sma.sma_crypto -- common/autotest_common.sh@835 -- # '[' -z 1830300 ']'
00:17:03.205   10:13:58 sma.sma_crypto -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:03.205   10:13:58 sma.sma_crypto -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:03.205   10:13:58 sma.sma_crypto -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:03.205  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:03.205   10:13:58 sma.sma_crypto -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:03.205   10:13:58 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:03.205  [2024-11-20 10:13:58.297407] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:17:03.205  [2024-11-20 10:13:58.297556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1830300 ]
00:17:03.464  EAL: No free 2048 kB hugepages reported on node 1
00:17:03.464  [2024-11-20 10:13:58.427254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:03.464  [2024-11-20 10:13:58.540715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:04.399   10:13:59 sma.sma_crypto -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:04.399   10:13:59 sma.sma_crypto -- common/autotest_common.sh@868 -- # return 0
00:17:04.399   10:13:59 sma.sma_crypto -- sma/crypto.sh@153 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py dpdk_cryptodev_scan_accel_module
00:17:04.399   10:13:59 sma.sma_crypto -- sma/crypto.sh@154 -- # rpc_cmd dpdk_cryptodev_set_driver -d crypto_aesni_mb
00:17:04.399   10:13:59 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:04.399   10:13:59 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:04.399  [2024-11-20 10:13:59.496000] accel_dpdk_cryptodev.c: 224:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:17:04.399   10:13:59 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:04.399   10:13:59 sma.sma_crypto -- sma/crypto.sh@155 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py accel_assign_opc -o encrypt -m dpdk_cryptodev
00:17:04.658  [2024-11-20 10:13:59.760724] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:17:04.658   10:13:59 sma.sma_crypto -- sma/crypto.sh@156 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py accel_assign_opc -o decrypt -m dpdk_cryptodev
00:17:05.230  [2024-11-20 10:14:00.037540] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:17:05.230   10:14:00 sma.sma_crypto -- sma/crypto.sh@157 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py framework_start_init
00:17:05.490  [2024-11-20 10:14:00.582970] accel_dpdk_cryptodev.c:1179:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 1
00:17:06.429   10:14:01 sma.sma_crypto -- sma/crypto.sh@160 -- # tgtpid=1830698
00:17:06.429   10:14:01 sma.sma_crypto -- sma/crypto.sh@159 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/spdk.sock2 -m 0x2
00:17:06.429   10:14:01 sma.sma_crypto -- sma/crypto.sh@172 -- # smapid=1830699
00:17:06.429   10:14:01 sma.sma_crypto -- sma/crypto.sh@175 -- # sma_waitforlisten
00:17:06.429   10:14:01 sma.sma_crypto -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:17:06.429   10:14:01 sma.sma_crypto -- sma/common.sh@8 -- # local sma_port=8080
00:17:06.429   10:14:01 sma.sma_crypto -- sma/common.sh@10 -- # (( i = 0 ))
00:17:06.429   10:14:01 sma.sma_crypto -- sma/common.sh@10 -- # (( i < 5 ))
00:17:06.429   10:14:01 sma.sma_crypto -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:17:06.429   10:14:01 sma.sma_crypto -- sma/crypto.sh@162 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:17:06.429    10:14:01 sma.sma_crypto -- sma/crypto.sh@162 -- # cat
00:17:06.429   10:14:01 sma.sma_crypto -- sma/common.sh@14 -- # sleep 1s
00:17:06.429  [2024-11-20 10:14:01.328328] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:17:06.429  [2024-11-20 10:14:01.328466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1830698 ]
00:17:06.429  EAL: No free 2048 kB hugepages reported on node 1
00:17:06.429  [2024-11-20 10:14:01.462049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:06.429  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:06.429  I0000 00:00:1732094041.483790 1830699 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:06.429  [2024-11-20 10:14:01.497967] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:06.689  [2024-11-20 10:14:01.582873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:07.255   10:14:02 sma.sma_crypto -- sma/common.sh@10 -- # (( i++ ))
00:17:07.255   10:14:02 sma.sma_crypto -- sma/common.sh@10 -- # (( i < 5 ))
00:17:07.255   10:14:02 sma.sma_crypto -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:17:07.255   10:14:02 sma.sma_crypto -- sma/common.sh@12 -- # return 0
00:17:07.255    10:14:02 sma.sma_crypto -- sma/crypto.sh@178 -- # uuidgen
00:17:07.255   10:14:02 sma.sma_crypto -- sma/crypto.sh@178 -- # uuid=334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:07.255   10:14:02 sma.sma_crypto -- sma/crypto.sh@179 -- # waitforlisten 1830698 /var/tmp/spdk.sock2
00:17:07.255   10:14:02 sma.sma_crypto -- common/autotest_common.sh@835 -- # '[' -z 1830698 ']'
00:17:07.255   10:14:02 sma.sma_crypto -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock2
00:17:07.255   10:14:02 sma.sma_crypto -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:07.255   10:14:02 sma.sma_crypto -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock2...'
00:17:07.255  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock2...
00:17:07.255   10:14:02 sma.sma_crypto -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:07.255   10:14:02 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:07.514   10:14:02 sma.sma_crypto -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:07.514   10:14:02 sma.sma_crypto -- common/autotest_common.sh@868 -- # return 0
00:17:07.514   10:14:02 sma.sma_crypto -- sma/crypto.sh@180 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2
00:17:07.774  [2024-11-20 10:14:02.865967] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:07.774  [2024-11-20 10:14:02.882455] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 8009 ***
00:17:07.774  [2024-11-20 10:14:02.890229] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 ***
00:17:08.033  malloc0
00:17:08.033    10:14:02 sma.sma_crypto -- sma/crypto.sh@190 -- # create_device
00:17:08.033    10:14:02 sma.sma_crypto -- sma/crypto.sh@190 -- # jq -r .handle
00:17:08.033    10:14:02 sma.sma_crypto -- sma/crypto.sh@77 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:08.033  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:08.033  I0000 00:00:1732094043.145635 1830880 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:08.033  I0000 00:00:1732094043.147607 1830880 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:08.033  I0000 00:00:1732094043.149239 1830881 subchannel.cc:806] subchannel 0x561dd097a180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x561dd08871c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x561dd092b460, grpc.internal.client_channel_call_destination=0x7fe66b48c390, grpc.internal.event_engine=0x561dd08ed440, grpc.internal.security_connector=0x561dd07e3650, grpc.internal.subchannel_pool=0x561dd0961c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x561dd05aa2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:03.148716809+01:00"}), backing off for 1000 ms
00:17:08.292  [2024-11-20 10:14:03.171494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:17:08.292   10:14:03 sma.sma_crypto -- sma/crypto.sh@190 -- # device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:08.292   10:14:03 sma.sma_crypto -- sma/crypto.sh@193 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:08.292   10:14:03 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:08.292   10:14:03 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:08.292   10:14:03 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:08.292    10:14:03 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:08.292    10:14:03 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=334e37fb-3ac1-4a09-afeb-bfc37ef34708 cipher= key= key2= config
00:17:08.292    10:14:03 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:08.292     10:14:03 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:08.292      10:14:03 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:08.292      10:14:03 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:08.292    10:14:03 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "M043+zrBSgmv67/DfvNHCA==",
00:17:08.292  "nvmf": {
00:17:08.292    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:08.292    "discovery": {
00:17:08.292      "discovery_endpoints": [
00:17:08.292        {
00:17:08.292          "trtype": "tcp",
00:17:08.292          "traddr": "127.0.0.1",
00:17:08.292          "trsvcid": "8009"
00:17:08.292        }
00:17:08.292      ]
00:17:08.292    }
00:17:08.292  }'
00:17:08.292    10:14:03 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:08.292    10:14:03 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:08.292    10:14:03 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n '' ]]
00:17:08.292    10:14:03 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:08.551  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:08.551  I0000 00:00:1732094043.499928 1830936 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:08.551  I0000 00:00:1732094043.501607 1830936 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:08.551  I0000 00:00:1732094043.503243 1831037 subchannel.cc:806] subchannel 0x557c03c93180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x557c03ba01c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x557c03c44460, grpc.internal.client_channel_call_destination=0x7fad69f67390, grpc.internal.event_engine=0x557c03afc670, grpc.internal.security_connector=0x557c03ad1600, grpc.internal.subchannel_pool=0x557c03b6c7c0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x557c039b1a80, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:03.502692176+01:00"}), backing off for 999 ms
00:17:09.963  {}
00:17:09.963    10:14:04 sma.sma_crypto -- sma/crypto.sh@195 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:09.963    10:14:04 sma.sma_crypto -- sma/crypto.sh@195 -- # jq -r '.[0].namespaces[0].name'
00:17:09.963    10:14:04 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:09.963    10:14:04 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:09.963    10:14:04 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:09.963   10:14:04 sma.sma_crypto -- sma/crypto.sh@195 -- # ns_bdev=e157144e-f931-4247-80f9-a52347d8920b0n1
00:17:09.963    10:14:04 sma.sma_crypto -- sma/crypto.sh@196 -- # rpc_cmd bdev_get_bdevs -b e157144e-f931-4247-80f9-a52347d8920b0n1
00:17:09.963    10:14:04 sma.sma_crypto -- sma/crypto.sh@196 -- # jq -r '.[0].product_name'
00:17:09.963    10:14:04 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:09.963    10:14:04 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:09.963    10:14:04 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:09.963   10:14:04 sma.sma_crypto -- sma/crypto.sh@196 -- # [[ NVMe disk == \N\V\M\e\ \d\i\s\k ]]
00:17:09.963    10:14:04 sma.sma_crypto -- sma/crypto.sh@197 -- # rpc_cmd bdev_get_bdevs
00:17:09.963    10:14:04 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:09.963    10:14:04 sma.sma_crypto -- sma/crypto.sh@197 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:17:09.963    10:14:04 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:09.963    10:14:04 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:09.963   10:14:04 sma.sma_crypto -- sma/crypto.sh@197 -- # [[ 0 -eq 0 ]]
00:17:09.963    10:14:04 sma.sma_crypto -- sma/crypto.sh@198 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:09.963    10:14:04 sma.sma_crypto -- sma/crypto.sh@198 -- # jq -r '.[0].namespaces[0].uuid'
00:17:09.963    10:14:04 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:09.963    10:14:04 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:09.963    10:14:04 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:09.963   10:14:04 sma.sma_crypto -- sma/crypto.sh@198 -- # [[ 334e37fb-3ac1-4a09-afeb-bfc37ef34708 == \3\3\4\e\3\7\f\b\-\3\a\c\1\-\4\a\0\9\-\a\f\e\b\-\b\f\c\3\7\e\f\3\4\7\0\8 ]]
00:17:09.963    10:14:04 sma.sma_crypto -- sma/crypto.sh@199 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:09.963    10:14:04 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:09.963    10:14:04 sma.sma_crypto -- sma/crypto.sh@199 -- # jq -r '.[0].namespaces[0].nguid'
00:17:09.963    10:14:04 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:09.964    10:14:04 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:09.964    10:14:04 sma.sma_crypto -- sma/crypto.sh@199 -- # uuid2nguid 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:09.964    10:14:04 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=334E37FB-3AC1-4A09-AFEB-BFC37EF34708
00:17:09.964    10:14:04 sma.sma_crypto -- sma/common.sh@41 -- # echo 334E37FB3AC14A09AFEBBFC37EF34708
00:17:09.964   10:14:04 sma.sma_crypto -- sma/crypto.sh@199 -- # [[ 334E37FB3AC14A09AFEBBFC37EF34708 == \3\3\4\E\3\7\F\B\3\A\C\1\4\A\0\9\A\F\E\B\B\F\C\3\7\E\F\3\4\7\0\8 ]]
00:17:09.964   10:14:04 sma.sma_crypto -- sma/crypto.sh@201 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:09.964   10:14:04 sma.sma_crypto -- sma/crypto.sh@120 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:09.964    10:14:04 sma.sma_crypto -- sma/crypto.sh@120 -- # uuid2base64 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:09.964    10:14:04 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:10.247  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:10.247  I0000 00:00:1732094045.151737 1831211 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:10.247  I0000 00:00:1732094045.153759 1831211 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:10.247  I0000 00:00:1732094045.155366 1831221 subchannel.cc:806] subchannel 0x55cee70c5180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55cee6fd21c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55cee7076460, grpc.internal.client_channel_call_destination=0x7f771c8d6390, grpc.internal.event_engine=0x55cee7038440, grpc.internal.security_connector=0x55cee6f2e650, grpc.internal.subchannel_pool=0x55cee70acc10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55cee6cf52f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:05.154867407+01:00"}), backing off for 1000 ms
00:17:10.247  {}
00:17:10.247   10:14:05 sma.sma_crypto -- sma/crypto.sh@204 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708 AES_CBC 1234567890abcdef1234567890abcdef
00:17:10.247   10:14:05 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:10.247   10:14:05 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:10.247   10:14:05 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:10.247    10:14:05 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 334e37fb-3ac1-4a09-afeb-bfc37ef34708 AES_CBC 1234567890abcdef1234567890abcdef
00:17:10.247    10:14:05 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=334e37fb-3ac1-4a09-afeb-bfc37ef34708 cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2= config
00:17:10.247    10:14:05 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:10.247     10:14:05 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:10.247      10:14:05 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:10.247      10:14:05 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:10.247    10:14:05 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "M043+zrBSgmv67/DfvNHCA==",
00:17:10.247  "nvmf": {
00:17:10.247    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:10.247    "discovery": {
00:17:10.247      "discovery_endpoints": [
00:17:10.247        {
00:17:10.247          "trtype": "tcp",
00:17:10.247          "traddr": "127.0.0.1",
00:17:10.247          "trsvcid": "8009"
00:17:10.247        }
00:17:10.247      ]
00:17:10.247    }
00:17:10.248  }'
00:17:10.248    10:14:05 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:10.248    10:14:05 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:10.248    10:14:05 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:17:10.248    10:14:05 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:10.248     10:14:05 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:17:10.248     10:14:05 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:10.248     10:14:05 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:17:10.248    10:14:05 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:10.248     10:14:05 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:10.248     10:14:05 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:10.248      10:14:05 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:10.248    10:14:05 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:10.248     10:14:05 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:10.248    10:14:05 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:10.248    "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:17:10.248  }'
00:17:10.248    10:14:05 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:10.248    10:14:05 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:10.508  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:10.508  I0000 00:00:1732094045.536892 1831241 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:10.508  I0000 00:00:1732094045.538739 1831241 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:10.508  I0000 00:00:1732094045.540526 1831379 subchannel.cc:806] subchannel 0x564533875180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5645337821c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x564533826460, grpc.internal.client_channel_call_destination=0x7fe9ca8ae390, grpc.internal.event_engine=0x5645337e8440, grpc.internal.security_connector=0x56453385cd00, grpc.internal.subchannel_pool=0x56453385cc10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5645334a52f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:05.540024064+01:00"}), backing off for 1000 ms
00:17:11.885  {}
00:17:11.885    10:14:06 sma.sma_crypto -- sma/crypto.sh@206 -- # rpc_cmd bdev_nvme_get_discovery_info
00:17:11.885    10:14:06 sma.sma_crypto -- sma/crypto.sh@206 -- # jq -r '. | length'
00:17:11.885    10:14:06 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:11.885    10:14:06 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:11.885    10:14:06 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:11.885   10:14:06 sma.sma_crypto -- sma/crypto.sh@206 -- # [[ 1 -eq 1 ]]
00:17:11.885    10:14:06 sma.sma_crypto -- sma/crypto.sh@207 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:11.885    10:14:06 sma.sma_crypto -- sma/crypto.sh@207 -- # jq -r '.[0].namespaces | length'
00:17:11.885    10:14:06 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:11.885    10:14:06 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:11.885    10:14:06 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:11.885   10:14:06 sma.sma_crypto -- sma/crypto.sh@207 -- # [[ 1 -eq 1 ]]
00:17:11.885   10:14:06 sma.sma_crypto -- sma/crypto.sh@209 -- # verify_crypto_volume nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:11.885   10:14:06 sma.sma_crypto -- sma/crypto.sh@132 -- # local nqn=nqn.2016-06.io.spdk:cnode0 uuid=334e37fb-3ac1-4a09-afeb-bfc37ef34708 ns ns_bdev
00:17:11.885    10:14:06 sma.sma_crypto -- sma/crypto.sh@134 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:11.885    10:14:06 sma.sma_crypto -- sma/crypto.sh@134 -- # jq -r '.[0].namespaces[0]'
00:17:11.885    10:14:06 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:11.885    10:14:06 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:11.885    10:14:06 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:11.885   10:14:06 sma.sma_crypto -- sma/crypto.sh@134 -- # ns='{
00:17:11.885    "nsid": 1,
00:17:11.885    "bdev_name": "d1113dc9-5a2a-4341-bf86-2975b5d8433a",
00:17:11.885    "name": "d1113dc9-5a2a-4341-bf86-2975b5d8433a",
00:17:11.885    "nguid": "334E37FB3AC14A09AFEBBFC37EF34708",
00:17:11.885    "uuid": "334e37fb-3ac1-4a09-afeb-bfc37ef34708"
00:17:11.885  }'
00:17:11.885    10:14:06 sma.sma_crypto -- sma/crypto.sh@135 -- # jq -r .name
00:17:11.885   10:14:06 sma.sma_crypto -- sma/crypto.sh@135 -- # ns_bdev=d1113dc9-5a2a-4341-bf86-2975b5d8433a
00:17:11.885    10:14:06 sma.sma_crypto -- sma/crypto.sh@138 -- # rpc_cmd bdev_get_bdevs -b d1113dc9-5a2a-4341-bf86-2975b5d8433a
00:17:11.885    10:14:06 sma.sma_crypto -- sma/crypto.sh@138 -- # jq -r '.[0].product_name'
00:17:11.885    10:14:06 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:11.885    10:14:06 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:11.885    10:14:06 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:11.885   10:14:06 sma.sma_crypto -- sma/crypto.sh@138 -- # [[ crypto == crypto ]]
00:17:11.885    10:14:06 sma.sma_crypto -- sma/crypto.sh@139 -- # rpc_cmd bdev_get_bdevs
00:17:11.885    10:14:06 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:11.885    10:14:06 sma.sma_crypto -- sma/crypto.sh@139 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:17:11.885    10:14:06 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:11.885    10:14:06 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:11.885   10:14:06 sma.sma_crypto -- sma/crypto.sh@139 -- # [[ 1 -eq 1 ]]
00:17:11.885    10:14:06 sma.sma_crypto -- sma/crypto.sh@141 -- # jq -r .uuid
00:17:11.886   10:14:06 sma.sma_crypto -- sma/crypto.sh@141 -- # [[ 334e37fb-3ac1-4a09-afeb-bfc37ef34708 == \3\3\4\e\3\7\f\b\-\3\a\c\1\-\4\a\0\9\-\a\f\e\b\-\b\f\c\3\7\e\f\3\4\7\0\8 ]]
00:17:11.886    10:14:06 sma.sma_crypto -- sma/crypto.sh@142 -- # jq -r .nguid
00:17:12.144    10:14:07 sma.sma_crypto -- sma/crypto.sh@142 -- # uuid2nguid 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:12.144    10:14:07 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=334E37FB-3AC1-4A09-AFEB-BFC37EF34708
00:17:12.144    10:14:07 sma.sma_crypto -- sma/common.sh@41 -- # echo 334E37FB3AC14A09AFEBBFC37EF34708
00:17:12.144   10:14:07 sma.sma_crypto -- sma/crypto.sh@142 -- # [[ 334E37FB3AC14A09AFEBBFC37EF34708 == \3\3\4\E\3\7\F\B\3\A\C\1\4\A\0\9\A\F\E\B\B\F\C\3\7\E\F\3\4\7\0\8 ]]
00:17:12.144    10:14:07 sma.sma_crypto -- sma/crypto.sh@211 -- # rpc_cmd bdev_get_bdevs
00:17:12.144    10:14:07 sma.sma_crypto -- sma/crypto.sh@211 -- # jq -r '.[] | select(.product_name == "crypto")'
00:17:12.144    10:14:07 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:12.144    10:14:07 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:12.144    10:14:07 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:12.144   10:14:07 sma.sma_crypto -- sma/crypto.sh@211 -- # crypto_bdev='{
00:17:12.144    "name": "d1113dc9-5a2a-4341-bf86-2975b5d8433a",
00:17:12.144    "aliases": [
00:17:12.144      "4a1a0c2c-2603-5d36-b8d1-115c5e378b53"
00:17:12.144    ],
00:17:12.144    "product_name": "crypto",
00:17:12.144    "block_size": 4096,
00:17:12.144    "num_blocks": 8192,
00:17:12.144    "uuid": "4a1a0c2c-2603-5d36-b8d1-115c5e378b53",
00:17:12.144    "assigned_rate_limits": {
00:17:12.144      "rw_ios_per_sec": 0,
00:17:12.144      "rw_mbytes_per_sec": 0,
00:17:12.144      "r_mbytes_per_sec": 0,
00:17:12.144      "w_mbytes_per_sec": 0
00:17:12.144    },
00:17:12.144    "claimed": true,
00:17:12.144    "claim_type": "exclusive_write",
00:17:12.144    "zoned": false,
00:17:12.144    "supported_io_types": {
00:17:12.144      "read": true,
00:17:12.144      "write": true,
00:17:12.144      "unmap": true,
00:17:12.144      "flush": true,
00:17:12.144      "reset": true,
00:17:12.144      "nvme_admin": false,
00:17:12.144      "nvme_io": false,
00:17:12.144      "nvme_io_md": false,
00:17:12.144      "write_zeroes": true,
00:17:12.144      "zcopy": false,
00:17:12.144      "get_zone_info": false,
00:17:12.144      "zone_management": false,
00:17:12.144      "zone_append": false,
00:17:12.144      "compare": false,
00:17:12.144      "compare_and_write": false,
00:17:12.144      "abort": false,
00:17:12.144      "seek_hole": false,
00:17:12.144      "seek_data": false,
00:17:12.144      "copy": false,
00:17:12.144      "nvme_iov_md": false
00:17:12.144    },
00:17:12.144    "memory_domains": [
00:17:12.144      {
00:17:12.144        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:12.144        "dma_device_type": 2
00:17:12.144      }
00:17:12.144    ],
00:17:12.144    "driver_specific": {
00:17:12.144      "crypto": {
00:17:12.144        "base_bdev_name": "5fb6b73b-e39e-4dd1-a58d-b11b4714d3d60n1",
00:17:12.144        "name": "d1113dc9-5a2a-4341-bf86-2975b5d8433a",
00:17:12.144        "key_name": "d1113dc9-5a2a-4341-bf86-2975b5d8433a_AES_CBC"
00:17:12.144      }
00:17:12.144    }
00:17:12.144  }'
00:17:12.144    10:14:07 sma.sma_crypto -- sma/crypto.sh@212 -- # jq -r .driver_specific.crypto.key_name
00:17:12.144   10:14:07 sma.sma_crypto -- sma/crypto.sh@212 -- # key_name=d1113dc9-5a2a-4341-bf86-2975b5d8433a_AES_CBC
00:17:12.144    10:14:07 sma.sma_crypto -- sma/crypto.sh@213 -- # rpc_cmd accel_crypto_keys_get -k d1113dc9-5a2a-4341-bf86-2975b5d8433a_AES_CBC
00:17:12.144    10:14:07 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:12.144    10:14:07 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:12.144    10:14:07 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:12.144   10:14:07 sma.sma_crypto -- sma/crypto.sh@213 -- # key_obj='[
00:17:12.144  {
00:17:12.144  "name": "d1113dc9-5a2a-4341-bf86-2975b5d8433a_AES_CBC",
00:17:12.144  "cipher": "AES_CBC",
00:17:12.144  "key": "1234567890abcdef1234567890abcdef"
00:17:12.144  }
00:17:12.144  ]'
00:17:12.144    10:14:07 sma.sma_crypto -- sma/crypto.sh@214 -- # jq -r '.[0].key'
00:17:12.144   10:14:07 sma.sma_crypto -- sma/crypto.sh@214 -- # [[ 1234567890abcdef1234567890abcdef == \1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f\1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f ]]
00:17:12.144    10:14:07 sma.sma_crypto -- sma/crypto.sh@215 -- # jq -r '.[0].cipher'
00:17:12.144   10:14:07 sma.sma_crypto -- sma/crypto.sh@215 -- # [[ AES_CBC == \A\E\S\_\C\B\C ]]
00:17:12.144   10:14:07 sma.sma_crypto -- sma/crypto.sh@218 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708 AES_CBC 1234567890abcdef1234567890abcdef
00:17:12.144   10:14:07 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:12.144   10:14:07 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:12.144   10:14:07 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:12.144    10:14:07 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 334e37fb-3ac1-4a09-afeb-bfc37ef34708 AES_CBC 1234567890abcdef1234567890abcdef
00:17:12.144    10:14:07 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=334e37fb-3ac1-4a09-afeb-bfc37ef34708 cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2= config
00:17:12.144    10:14:07 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:12.144     10:14:07 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:12.144      10:14:07 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:12.144      10:14:07 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:12.144    10:14:07 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "M043+zrBSgmv67/DfvNHCA==",
00:17:12.144  "nvmf": {
00:17:12.144    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:12.144    "discovery": {
00:17:12.144      "discovery_endpoints": [
00:17:12.144        {
00:17:12.144          "trtype": "tcp",
00:17:12.144          "traddr": "127.0.0.1",
00:17:12.144          "trsvcid": "8009"
00:17:12.144        }
00:17:12.144      ]
00:17:12.144    }
00:17:12.144  }'
00:17:12.144    10:14:07 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:12.144    10:14:07 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:12.144    10:14:07 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:17:12.144    10:14:07 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:12.144     10:14:07 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:17:12.144     10:14:07 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:12.144     10:14:07 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:17:12.144    10:14:07 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:12.144     10:14:07 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:12.144     10:14:07 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:12.144      10:14:07 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:12.144    10:14:07 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:12.144     10:14:07 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:12.144    10:14:07 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:12.144    "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:17:12.144  }'
00:17:12.144    10:14:07 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:12.144    10:14:07 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:12.403  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:12.403  I0000 00:00:1732094047.479375 1831574 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:12.403  I0000 00:00:1732094047.481204 1831574 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:12.403  I0000 00:00:1732094047.482986 1831593 subchannel.cc:806] subchannel 0x564e87728180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x564e876351c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x564e876d9460, grpc.internal.client_channel_call_destination=0x7f17ca5c4390, grpc.internal.event_engine=0x564e8769b440, grpc.internal.security_connector=0x564e8770fd00, grpc.internal.subchannel_pool=0x564e8770fc10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x564e873582f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:07.482506898+01:00"}), backing off for 999 ms
00:17:12.661  {}
00:17:12.661    10:14:07 sma.sma_crypto -- sma/crypto.sh@221 -- # rpc_cmd bdev_nvme_get_discovery_info
00:17:12.661    10:14:07 sma.sma_crypto -- sma/crypto.sh@221 -- # jq -r '. | length'
00:17:12.661    10:14:07 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:12.661    10:14:07 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:12.661    10:14:07 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:12.661   10:14:07 sma.sma_crypto -- sma/crypto.sh@221 -- # [[ 1 -eq 1 ]]
00:17:12.661    10:14:07 sma.sma_crypto -- sma/crypto.sh@222 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:12.661    10:14:07 sma.sma_crypto -- sma/crypto.sh@222 -- # jq -r '.[0].namespaces | length'
00:17:12.661    10:14:07 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:12.661    10:14:07 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:12.661    10:14:07 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:12.661   10:14:07 sma.sma_crypto -- sma/crypto.sh@222 -- # [[ 1 -eq 1 ]]
00:17:12.661   10:14:07 sma.sma_crypto -- sma/crypto.sh@223 -- # verify_crypto_volume nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:12.661   10:14:07 sma.sma_crypto -- sma/crypto.sh@132 -- # local nqn=nqn.2016-06.io.spdk:cnode0 uuid=334e37fb-3ac1-4a09-afeb-bfc37ef34708 ns ns_bdev
00:17:12.661    10:14:07 sma.sma_crypto -- sma/crypto.sh@134 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:12.661    10:14:07 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:12.661    10:14:07 sma.sma_crypto -- sma/crypto.sh@134 -- # jq -r '.[0].namespaces[0]'
00:17:12.661    10:14:07 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:12.661    10:14:07 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:12.661   10:14:07 sma.sma_crypto -- sma/crypto.sh@134 -- # ns='{
00:17:12.661    "nsid": 1,
00:17:12.661    "bdev_name": "d1113dc9-5a2a-4341-bf86-2975b5d8433a",
00:17:12.661    "name": "d1113dc9-5a2a-4341-bf86-2975b5d8433a",
00:17:12.661    "nguid": "334E37FB3AC14A09AFEBBFC37EF34708",
00:17:12.661    "uuid": "334e37fb-3ac1-4a09-afeb-bfc37ef34708"
00:17:12.661  }'
00:17:12.661    10:14:07 sma.sma_crypto -- sma/crypto.sh@135 -- # jq -r .name
00:17:12.661   10:14:07 sma.sma_crypto -- sma/crypto.sh@135 -- # ns_bdev=d1113dc9-5a2a-4341-bf86-2975b5d8433a
00:17:12.661    10:14:07 sma.sma_crypto -- sma/crypto.sh@138 -- # rpc_cmd bdev_get_bdevs -b d1113dc9-5a2a-4341-bf86-2975b5d8433a
00:17:12.661    10:14:07 sma.sma_crypto -- sma/crypto.sh@138 -- # jq -r '.[0].product_name'
00:17:12.661    10:14:07 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:12.661    10:14:07 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:12.661    10:14:07 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:12.661   10:14:07 sma.sma_crypto -- sma/crypto.sh@138 -- # [[ crypto == crypto ]]
00:17:12.661    10:14:07 sma.sma_crypto -- sma/crypto.sh@139 -- # rpc_cmd bdev_get_bdevs
00:17:12.661    10:14:07 sma.sma_crypto -- sma/crypto.sh@139 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:17:12.661    10:14:07 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:12.661    10:14:07 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:12.661    10:14:07 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:12.661   10:14:07 sma.sma_crypto -- sma/crypto.sh@139 -- # [[ 1 -eq 1 ]]
00:17:12.661    10:14:07 sma.sma_crypto -- sma/crypto.sh@141 -- # jq -r .uuid
00:17:12.958   10:14:07 sma.sma_crypto -- sma/crypto.sh@141 -- # [[ 334e37fb-3ac1-4a09-afeb-bfc37ef34708 == \3\3\4\e\3\7\f\b\-\3\a\c\1\-\4\a\0\9\-\a\f\e\b\-\b\f\c\3\7\e\f\3\4\7\0\8 ]]
00:17:12.958    10:14:07 sma.sma_crypto -- sma/crypto.sh@142 -- # jq -r .nguid
00:17:12.958    10:14:07 sma.sma_crypto -- sma/crypto.sh@142 -- # uuid2nguid 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:12.958    10:14:07 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=334E37FB-3AC1-4A09-AFEB-BFC37EF34708
00:17:12.958    10:14:07 sma.sma_crypto -- sma/common.sh@41 -- # echo 334E37FB3AC14A09AFEBBFC37EF34708
00:17:12.958   10:14:07 sma.sma_crypto -- sma/crypto.sh@142 -- # [[ 334E37FB3AC14A09AFEBBFC37EF34708 == \3\3\4\E\3\7\F\B\3\A\C\1\4\A\0\9\A\F\E\B\B\F\C\3\7\E\F\3\4\7\0\8 ]]
00:17:12.958    10:14:07 sma.sma_crypto -- sma/crypto.sh@224 -- # rpc_cmd bdev_get_bdevs
00:17:12.958    10:14:07 sma.sma_crypto -- sma/crypto.sh@224 -- # jq -r '.[] | select(.product_name == "crypto")'
00:17:12.958    10:14:07 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:12.958    10:14:07 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:12.958    10:14:07 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:12.958   10:14:07 sma.sma_crypto -- sma/crypto.sh@224 -- # crypto_bdev2='{
00:17:12.958    "name": "d1113dc9-5a2a-4341-bf86-2975b5d8433a",
00:17:12.958    "aliases": [
00:17:12.958      "4a1a0c2c-2603-5d36-b8d1-115c5e378b53"
00:17:12.958    ],
00:17:12.958    "product_name": "crypto",
00:17:12.958    "block_size": 4096,
00:17:12.958    "num_blocks": 8192,
00:17:12.958    "uuid": "4a1a0c2c-2603-5d36-b8d1-115c5e378b53",
00:17:12.958    "assigned_rate_limits": {
00:17:12.958      "rw_ios_per_sec": 0,
00:17:12.958      "rw_mbytes_per_sec": 0,
00:17:12.958      "r_mbytes_per_sec": 0,
00:17:12.958      "w_mbytes_per_sec": 0
00:17:12.958    },
00:17:12.958    "claimed": true,
00:17:12.958    "claim_type": "exclusive_write",
00:17:12.958    "zoned": false,
00:17:12.958    "supported_io_types": {
00:17:12.958      "read": true,
00:17:12.958      "write": true,
00:17:12.958      "unmap": true,
00:17:12.958      "flush": true,
00:17:12.958      "reset": true,
00:17:12.958      "nvme_admin": false,
00:17:12.958      "nvme_io": false,
00:17:12.958      "nvme_io_md": false,
00:17:12.958      "write_zeroes": true,
00:17:12.958      "zcopy": false,
00:17:12.958      "get_zone_info": false,
00:17:12.958      "zone_management": false,
00:17:12.958      "zone_append": false,
00:17:12.958      "compare": false,
00:17:12.958      "compare_and_write": false,
00:17:12.958      "abort": false,
00:17:12.958      "seek_hole": false,
00:17:12.958      "seek_data": false,
00:17:12.958      "copy": false,
00:17:12.958      "nvme_iov_md": false
00:17:12.958    },
00:17:12.958    "memory_domains": [
00:17:12.958      {
00:17:12.958        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:12.958        "dma_device_type": 2
00:17:12.958      }
00:17:12.958    ],
00:17:12.958    "driver_specific": {
00:17:12.958      "crypto": {
00:17:12.958        "base_bdev_name": "5fb6b73b-e39e-4dd1-a58d-b11b4714d3d60n1",
00:17:12.958        "name": "d1113dc9-5a2a-4341-bf86-2975b5d8433a",
00:17:12.958        "key_name": "d1113dc9-5a2a-4341-bf86-2975b5d8433a_AES_CBC"
00:17:12.958      }
00:17:12.958    }
00:17:12.958  }'
00:17:12.958    10:14:07 sma.sma_crypto -- sma/crypto.sh@225 -- # jq -r .name
00:17:12.958    10:14:07 sma.sma_crypto -- sma/crypto.sh@225 -- # jq -r .name
00:17:12.958   10:14:07 sma.sma_crypto -- sma/crypto.sh@225 -- # [[ d1113dc9-5a2a-4341-bf86-2975b5d8433a == d1113dc9-5a2a-4341-bf86-2975b5d8433a ]]
00:17:12.958    10:14:07 sma.sma_crypto -- sma/crypto.sh@226 -- # jq -r .driver_specific.crypto.key_name
00:17:12.958   10:14:07 sma.sma_crypto -- sma/crypto.sh@226 -- # key_name=d1113dc9-5a2a-4341-bf86-2975b5d8433a_AES_CBC
00:17:12.959    10:14:07 sma.sma_crypto -- sma/crypto.sh@227 -- # rpc_cmd accel_crypto_keys_get -k d1113dc9-5a2a-4341-bf86-2975b5d8433a_AES_CBC
00:17:12.959    10:14:07 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:12.959    10:14:07 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:12.959    10:14:08 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:12.959   10:14:08 sma.sma_crypto -- sma/crypto.sh@227 -- # key_obj='[
00:17:12.959  {
00:17:12.959  "name": "d1113dc9-5a2a-4341-bf86-2975b5d8433a_AES_CBC",
00:17:12.959  "cipher": "AES_CBC",
00:17:12.959  "key": "1234567890abcdef1234567890abcdef"
00:17:12.959  }
00:17:12.959  ]'
00:17:12.959    10:14:08 sma.sma_crypto -- sma/crypto.sh@228 -- # jq -r '.[0].key'
00:17:12.959   10:14:08 sma.sma_crypto -- sma/crypto.sh@228 -- # [[ 1234567890abcdef1234567890abcdef == \1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f\1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f ]]
00:17:12.959    10:14:08 sma.sma_crypto -- sma/crypto.sh@229 -- # jq -r '.[0].cipher'
00:17:12.959   10:14:08 sma.sma_crypto -- sma/crypto.sh@229 -- # [[ AES_CBC == \A\E\S\_\C\B\C ]]
00:17:12.959   10:14:08 sma.sma_crypto -- sma/crypto.sh@232 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708 AES_XTS 1234567890abcdef1234567890abcdef
00:17:12.959   10:14:08 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:17:12.959   10:14:08 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708 AES_XTS 1234567890abcdef1234567890abcdef
00:17:12.959   10:14:08 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:17:12.959   10:14:08 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:12.959    10:14:08 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:17:12.959   10:14:08 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:12.959   10:14:08 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708 AES_XTS 1234567890abcdef1234567890abcdef
00:17:12.959   10:14:08 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:12.959   10:14:08 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:12.959   10:14:08 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:12.959    10:14:08 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 334e37fb-3ac1-4a09-afeb-bfc37ef34708 AES_XTS 1234567890abcdef1234567890abcdef
00:17:13.216    10:14:08 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=334e37fb-3ac1-4a09-afeb-bfc37ef34708 cipher=AES_XTS key=1234567890abcdef1234567890abcdef key2= config
00:17:13.216    10:14:08 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:13.216     10:14:08 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:13.216      10:14:08 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:13.216      10:14:08 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:13.216    10:14:08 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "M043+zrBSgmv67/DfvNHCA==",
00:17:13.216  "nvmf": {
00:17:13.216    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:13.216    "discovery": {
00:17:13.216      "discovery_endpoints": [
00:17:13.216        {
00:17:13.216          "trtype": "tcp",
00:17:13.216          "traddr": "127.0.0.1",
00:17:13.216          "trsvcid": "8009"
00:17:13.216        }
00:17:13.216      ]
00:17:13.216    }
00:17:13.216  }'
00:17:13.216    10:14:08 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:13.216    10:14:08 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:13.216    10:14:08 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_XTS ]]
00:17:13.216    10:14:08 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:13.216     10:14:08 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_XTS
00:17:13.216     10:14:08 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:13.216     10:14:08 sma.sma_crypto -- sma/common.sh@29 -- # echo 1
00:17:13.216    10:14:08 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:13.216     10:14:08 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:13.216     10:14:08 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:13.216      10:14:08 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:13.216    10:14:08 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:13.216     10:14:08 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:13.216    10:14:08 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:13.216    "cipher": 1,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:17:13.216  }'
00:17:13.216    10:14:08 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:13.216    10:14:08 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:13.474  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:13.474  I0000 00:00:1732094048.371049 1831772 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:13.474  I0000 00:00:1732094048.372939 1831772 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:13.474  I0000 00:00:1732094048.374735 1831796 subchannel.cc:806] subchannel 0x55dc13206180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55dc131131c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55dc131b7460, grpc.internal.client_channel_call_destination=0x7fecdd8b2390, grpc.internal.event_engine=0x55dc13179440, grpc.internal.security_connector=0x55dc131edd00, grpc.internal.subchannel_pool=0x55dc131edc10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55dc12e362f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:08.374206725+01:00"}), backing off for 1000 ms
00:17:13.474  Traceback (most recent call last):
00:17:13.474    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:13.474      main(sys.argv[1:])
00:17:13.474    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:13.474      result = client.call(request['method'], request.get('params', {}))
00:17:13.474               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:13.474    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:13.474      response = func(request=json_format.ParseDict(params, input()))
00:17:13.474                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:13.474    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:13.474      return _end_unary_response_blocking(state, call, False, None)
00:17:13.474             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:13.474    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:13.474      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:13.474      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:13.474  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:13.474  	status = StatusCode.INVALID_ARGUMENT
00:17:13.474  	details = "Invalid volume crypto configuration: bad cipher"
00:17:13.474  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-11-20T10:14:08.392090067+01:00", grpc_status:3, grpc_message:"Invalid volume crypto configuration: bad cipher"}"
00:17:13.474  >
00:17:13.474   10:14:08 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:17:13.474   10:14:08 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:13.474   10:14:08 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:13.474   10:14:08 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:13.474   10:14:08 sma.sma_crypto -- sma/crypto.sh@234 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708 AES_CBC deadbeefcafebabefeedbeefbabecafe
00:17:13.474   10:14:08 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:17:13.474   10:14:08 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708 AES_CBC deadbeefcafebabefeedbeefbabecafe
00:17:13.474   10:14:08 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:17:13.474   10:14:08 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:13.474    10:14:08 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:17:13.474   10:14:08 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:13.474   10:14:08 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708 AES_CBC deadbeefcafebabefeedbeefbabecafe
00:17:13.474   10:14:08 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:13.474   10:14:08 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:13.474   10:14:08 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:13.474    10:14:08 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 334e37fb-3ac1-4a09-afeb-bfc37ef34708 AES_CBC deadbeefcafebabefeedbeefbabecafe
00:17:13.474    10:14:08 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=334e37fb-3ac1-4a09-afeb-bfc37ef34708 cipher=AES_CBC key=deadbeefcafebabefeedbeefbabecafe key2= config
00:17:13.474    10:14:08 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:13.474     10:14:08 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:13.474      10:14:08 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:13.474      10:14:08 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:13.474    10:14:08 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "M043+zrBSgmv67/DfvNHCA==",
00:17:13.474  "nvmf": {
00:17:13.474    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:13.474    "discovery": {
00:17:13.474      "discovery_endpoints": [
00:17:13.474        {
00:17:13.474          "trtype": "tcp",
00:17:13.474          "traddr": "127.0.0.1",
00:17:13.474          "trsvcid": "8009"
00:17:13.474        }
00:17:13.474      ]
00:17:13.474    }
00:17:13.474  }'
00:17:13.474    10:14:08 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:13.474    10:14:08 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:13.474    10:14:08 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:17:13.474    10:14:08 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:13.474     10:14:08 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:17:13.474     10:14:08 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:13.474     10:14:08 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:17:13.474    10:14:08 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:13.474     10:14:08 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key deadbeefcafebabefeedbeefbabecafe
00:17:13.474     10:14:08 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:13.474      10:14:08 sma.sma_crypto -- sma/common.sh@35 -- # echo -n deadbeefcafebabefeedbeefbabecafe
00:17:13.474    10:14:08 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:13.474     10:14:08 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:13.474    10:14:08 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:13.475    "cipher": 0,"key": "ZGVhZGJlZWZjYWZlYmFiZWZlZWRiZWVmYmFiZWNhZmU="
00:17:13.475  }'
00:17:13.475    10:14:08 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:13.475    10:14:08 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
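The `format_key` calls in the trace (`echo -n "$key" | base64 -w 0` via `/dev/fd/62`) base64-encode the raw key characters, which is how `deadbeefcafebabefeedbeefbabecafe` becomes the `"key"` string in the `crypto` block above. A sketch of the same transformation in Python:

```python
import base64

def format_key(key: str) -> str:
    # Equivalent of sma/common.sh format_key: base64 of the raw key
    # characters, i.e. `echo -n "$key" | base64 -w 0`.
    return base64.b64encode(key.encode()).decode()

print(format_key("deadbeefcafebabefeedbeefbabecafe"))
# ZGVhZGJlZWZjYWZlYmFiZWZlZWRiZWVmYmFiZWNhZmU=
```

Note the key is treated as an opaque character string, not decoded from hex; that is why this 32-character key yields a 44-character base64 value, which the SMA server then rejects as "bad key" for AES_CBC.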
00:17:13.733  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:13.733  I0000 00:00:1732094048.696584 1831817 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:13.733  I0000 00:00:1732094048.698305 1831817 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:13.733  I0000 00:00:1732094048.700009 1831831 subchannel.cc:806] subchannel 0x561c84680180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x561c8458d1c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x561c84631460, grpc.internal.client_channel_call_destination=0x7f5714123390, grpc.internal.event_engine=0x561c845f3440, grpc.internal.security_connector=0x561c84667d00, grpc.internal.subchannel_pool=0x561c84667c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x561c842b02f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:08.699541763+01:00"}), backing off for 999 ms
00:17:13.733  Traceback (most recent call last):
00:17:13.733    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:13.733      main(sys.argv[1:])
00:17:13.733    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:13.733      result = client.call(request['method'], request.get('params', {}))
00:17:13.733               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:13.733    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:13.733      response = func(request=json_format.ParseDict(params, input()))
00:17:13.733                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:13.733    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:13.733      return _end_unary_response_blocking(state, call, False, None)
00:17:13.733             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:13.733    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:13.733      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:13.733      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:13.733  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:13.733  	status = StatusCode.INVALID_ARGUMENT
00:17:13.733  	details = "Invalid volume crypto configuration: bad key"
00:17:13.733  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid volume crypto configuration: bad key", grpc_status:3, created_time:"2024-11-20T10:14:08.717416588+01:00"}"
00:17:13.733  >
00:17:13.733   10:14:08 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:17:13.733   10:14:08 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:13.733   10:14:08 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:13.733   10:14:08 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:13.733   10:14:08 sma.sma_crypto -- sma/crypto.sh@236 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708 AES_CBC 1234567890abcdef1234567890abcdef deadbeefcafebabefeedbeefbabecafe
00:17:13.733   10:14:08 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:17:13.733   10:14:08 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708 AES_CBC 1234567890abcdef1234567890abcdef deadbeefcafebabefeedbeefbabecafe
00:17:13.733   10:14:08 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:17:13.733   10:14:08 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:13.733    10:14:08 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:17:13.733   10:14:08 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:13.733   10:14:08 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708 AES_CBC 1234567890abcdef1234567890abcdef deadbeefcafebabefeedbeefbabecafe
00:17:13.733   10:14:08 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:13.733   10:14:08 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:13.733   10:14:08 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:13.733    10:14:08 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 334e37fb-3ac1-4a09-afeb-bfc37ef34708 AES_CBC 1234567890abcdef1234567890abcdef deadbeefcafebabefeedbeefbabecafe
00:17:13.733    10:14:08 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=334e37fb-3ac1-4a09-afeb-bfc37ef34708 cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2=deadbeefcafebabefeedbeefbabecafe config
00:17:13.733    10:14:08 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:13.733     10:14:08 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:13.733      10:14:08 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:13.733      10:14:08 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:13.733    10:14:08 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "M043+zrBSgmv67/DfvNHCA==",
00:17:13.733  "nvmf": {
00:17:13.733    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:13.733    "discovery": {
00:17:13.733      "discovery_endpoints": [
00:17:13.733        {
00:17:13.733          "trtype": "tcp",
00:17:13.733          "traddr": "127.0.0.1",
00:17:13.733          "trsvcid": "8009"
00:17:13.733        }
00:17:13.733      ]
00:17:13.733    }
00:17:13.733  }'
00:17:13.733    10:14:08 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:13.733    10:14:08 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:13.733    10:14:08 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:17:13.733    10:14:08 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:13.733     10:14:08 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:17:13.733     10:14:08 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:13.733     10:14:08 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:17:13.733    10:14:08 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:13.733     10:14:08 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:13.733     10:14:08 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:13.733      10:14:08 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:13.733    10:14:08 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n deadbeefcafebabefeedbeefbabecafe ]]
00:17:13.733    10:14:08 sma.sma_crypto -- sma/crypto.sh@55 -- # crypto+=("\"key2\": \"$(format_key $key2)\"")
00:17:13.733     10:14:08 sma.sma_crypto -- sma/crypto.sh@55 -- # format_key deadbeefcafebabefeedbeefbabecafe
00:17:13.733     10:14:08 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:13.733      10:14:08 sma.sma_crypto -- sma/common.sh@35 -- # echo -n deadbeefcafebabefeedbeefbabecafe
00:17:13.733     10:14:08 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:13.733    10:14:08 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:13.733    "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY=","key2": "ZGVhZGJlZWZjYWZlYmFiZWZlZWRiZWVmYmFiZWNhZmU="
00:17:13.733  }'
00:17:13.733    10:14:08 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:13.733    10:14:08 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:13.991  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:13.991  I0000 00:00:1732094049.054122 1831852 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:13.991  I0000 00:00:1732094049.055850 1831852 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:13.991  I0000 00:00:1732094049.057646 1831986 subchannel.cc:806] subchannel 0x55a6916ab180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55a6915b81c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55a69165c460, grpc.internal.client_channel_call_destination=0x7fe29c2b6390, grpc.internal.event_engine=0x55a6914c6ad0, grpc.internal.security_connector=0x55a6914e9600, grpc.internal.subchannel_pool=0x55a691584770, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55a691608ea0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:09.05709972+01:00"}), backing off for 1000 ms
00:17:13.991  Traceback (most recent call last):
00:17:13.991    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:13.991      main(sys.argv[1:])
00:17:13.991    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:13.991      result = client.call(request['method'], request.get('params', {}))
00:17:13.991               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:13.991    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:13.991      response = func(request=json_format.ParseDict(params, input()))
00:17:13.991                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:13.991    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:13.991      return _end_unary_response_blocking(state, call, False, None)
00:17:13.991             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:13.991    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:13.991      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:13.991      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:13.991  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:13.991  	status = StatusCode.INVALID_ARGUMENT
00:17:13.991  	details = "Invalid volume crypto configuration: bad key2"
00:17:13.991  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid volume crypto configuration: bad key2", grpc_status:3, created_time:"2024-11-20T10:14:09.074634624+01:00"}"
00:17:13.991  >
00:17:13.991   10:14:09 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:17:13.991   10:14:09 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:13.991   10:14:09 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:13.991   10:14:09 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:13.991   10:14:09 sma.sma_crypto -- sma/crypto.sh@238 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708 8 1234567890abcdef1234567890abcdef
00:17:13.991   10:14:09 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:17:13.991   10:14:09 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708 8 1234567890abcdef1234567890abcdef
00:17:13.991   10:14:09 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:17:13.991   10:14:09 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:13.991    10:14:09 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:17:13.991   10:14:09 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:13.991   10:14:09 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708 8 1234567890abcdef1234567890abcdef
00:17:13.991   10:14:09 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:13.991   10:14:09 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:13.991   10:14:09 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:13.991    10:14:09 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 334e37fb-3ac1-4a09-afeb-bfc37ef34708 8 1234567890abcdef1234567890abcdef
00:17:13.991    10:14:09 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=334e37fb-3ac1-4a09-afeb-bfc37ef34708 cipher=8 key=1234567890abcdef1234567890abcdef key2= config
00:17:13.991    10:14:09 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:13.991     10:14:09 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:13.991      10:14:09 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:13.991      10:14:09 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:14.249    10:14:09 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "M043+zrBSgmv67/DfvNHCA==",
00:17:14.249  "nvmf": {
00:17:14.249    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:14.249    "discovery": {
00:17:14.249      "discovery_endpoints": [
00:17:14.249        {
00:17:14.249          "trtype": "tcp",
00:17:14.249          "traddr": "127.0.0.1",
00:17:14.249          "trsvcid": "8009"
00:17:14.249        }
00:17:14.249      ]
00:17:14.249    }
00:17:14.249  }'
00:17:14.249    10:14:09 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:14.249    10:14:09 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:14.249    10:14:09 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n 8 ]]
00:17:14.249    10:14:09 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:14.249     10:14:09 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher 8
00:17:14.249     10:14:09 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:14.249     10:14:09 sma.sma_crypto -- sma/common.sh@30 -- # echo 8
00:17:14.249    10:14:09 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:14.249     10:14:09 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:14.249     10:14:09 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:14.249      10:14:09 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:14.249    10:14:09 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:14.249     10:14:09 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:14.250    10:14:09 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:14.250    "cipher": 8,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:17:14.250  }'
00:17:14.250    10:14:09 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:14.250    10:14:09 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
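This negative test passes the literal cipher value `8`. The trace shows `get_cipher` mapping `AES_CBC` to `0` (`sma/common.sh@28`) but echoing unrecognized input back unchanged (`sma/common.sh@30`), so `"cipher": 8` reaches the server verbatim and is rejected with "bad cipher". A hypothetical Python sketch of that dispatch (the `AES_XTS -> 1` entry is an assumption based on the `"cipher": 1` request earlier in this log; only `AES_CBC` appears by name here):

```python
# Known cipher names map to their enum values; anything else falls
# through verbatim so the SMA server can reject invalid values itself.
CIPHERS = {"AES_CBC": "0", "AES_XTS": "1"}

def get_cipher(name: str) -> str:
    return CIPHERS.get(name, name)

print(get_cipher("AES_CBC"))  # 0
print(get_cipher("8"))        # 8 (passed through, rejected server-side)
```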
00:17:14.507  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:14.507  I0000 00:00:1732094049.402159 1832007 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:14.507  I0000 00:00:1732094049.403983 1832007 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:14.507  I0000 00:00:1732094049.405829 1832031 subchannel.cc:806] subchannel 0x55c5d3b62180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55c5d3a6f1c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55c5d3b13460, grpc.internal.client_channel_call_destination=0x7ffb52475390, grpc.internal.event_engine=0x55c5d3ad5440, grpc.internal.security_connector=0x55c5d3b49d00, grpc.internal.subchannel_pool=0x55c5d3b49c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55c5d37922f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:09.405291251+01:00"}), backing off for 1000 ms
00:17:14.507  Traceback (most recent call last):
00:17:14.507    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:14.507      main(sys.argv[1:])
00:17:14.507    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:14.507      result = client.call(request['method'], request.get('params', {}))
00:17:14.507               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:14.507    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:14.507      response = func(request=json_format.ParseDict(params, input()))
00:17:14.507                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:14.507    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:14.507      return _end_unary_response_blocking(state, call, False, None)
00:17:14.507             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:14.507    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:14.507      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:14.507      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:14.507  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:14.507  	status = StatusCode.INVALID_ARGUMENT
00:17:14.507  	details = "Invalid volume crypto configuration: bad cipher"
00:17:14.507  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid volume crypto configuration: bad cipher", grpc_status:3, created_time:"2024-11-20T10:14:09.420018546+01:00"}"
00:17:14.507  >
00:17:14.507   10:14:09 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:17:14.507   10:14:09 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:14.507   10:14:09 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:14.507   10:14:09 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:14.507   10:14:09 sma.sma_crypto -- sma/crypto.sh@241 -- # verify_crypto_volume nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:14.507   10:14:09 sma.sma_crypto -- sma/crypto.sh@132 -- # local nqn=nqn.2016-06.io.spdk:cnode0 uuid=334e37fb-3ac1-4a09-afeb-bfc37ef34708 ns ns_bdev
00:17:14.507    10:14:09 sma.sma_crypto -- sma/crypto.sh@134 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:14.507    10:14:09 sma.sma_crypto -- sma/crypto.sh@134 -- # jq -r '.[0].namespaces[0]'
00:17:14.507    10:14:09 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:14.507    10:14:09 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:14.507    10:14:09 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:14.507   10:14:09 sma.sma_crypto -- sma/crypto.sh@134 -- # ns='{
00:17:14.507    "nsid": 1,
00:17:14.507    "bdev_name": "d1113dc9-5a2a-4341-bf86-2975b5d8433a",
00:17:14.507    "name": "d1113dc9-5a2a-4341-bf86-2975b5d8433a",
00:17:14.507    "nguid": "334E37FB3AC14A09AFEBBFC37EF34708",
00:17:14.507    "uuid": "334e37fb-3ac1-4a09-afeb-bfc37ef34708"
00:17:14.507  }'
00:17:14.507    10:14:09 sma.sma_crypto -- sma/crypto.sh@135 -- # jq -r .name
00:17:14.507   10:14:09 sma.sma_crypto -- sma/crypto.sh@135 -- # ns_bdev=d1113dc9-5a2a-4341-bf86-2975b5d8433a
00:17:14.507    10:14:09 sma.sma_crypto -- sma/crypto.sh@138 -- # rpc_cmd bdev_get_bdevs -b d1113dc9-5a2a-4341-bf86-2975b5d8433a
00:17:14.507    10:14:09 sma.sma_crypto -- sma/crypto.sh@138 -- # jq -r '.[0].product_name'
00:17:14.507    10:14:09 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:14.507    10:14:09 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:14.507    10:14:09 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:14.507   10:14:09 sma.sma_crypto -- sma/crypto.sh@138 -- # [[ crypto == crypto ]]
00:17:14.507    10:14:09 sma.sma_crypto -- sma/crypto.sh@139 -- # rpc_cmd bdev_get_bdevs
00:17:14.507    10:14:09 sma.sma_crypto -- sma/crypto.sh@139 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:17:14.507    10:14:09 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:14.507    10:14:09 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:14.507    10:14:09 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:14.507   10:14:09 sma.sma_crypto -- sma/crypto.sh@139 -- # [[ 1 -eq 1 ]]
00:17:14.507    10:14:09 sma.sma_crypto -- sma/crypto.sh@141 -- # jq -r .uuid
00:17:14.765   10:14:09 sma.sma_crypto -- sma/crypto.sh@141 -- # [[ 334e37fb-3ac1-4a09-afeb-bfc37ef34708 == \3\3\4\e\3\7\f\b\-\3\a\c\1\-\4\a\0\9\-\a\f\e\b\-\b\f\c\3\7\e\f\3\4\7\0\8 ]]
00:17:14.765    10:14:09 sma.sma_crypto -- sma/crypto.sh@142 -- # jq -r .nguid
00:17:14.765    10:14:09 sma.sma_crypto -- sma/crypto.sh@142 -- # uuid2nguid 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:14.765    10:14:09 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=334E37FB-3AC1-4A09-AFEB-BFC37EF34708
00:17:14.765    10:14:09 sma.sma_crypto -- sma/common.sh@41 -- # echo 334E37FB3AC14A09AFEBBFC37EF34708
00:17:14.765   10:14:09 sma.sma_crypto -- sma/crypto.sh@142 -- # [[ 334E37FB3AC14A09AFEBBFC37EF34708 == \3\3\4\E\3\7\F\B\3\A\C\1\4\A\0\9\A\F\E\B\B\F\C\3\7\E\F\3\4\7\0\8 ]]
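The `uuid2nguid` comparison above checks that the namespace NGUID is just the volume UUID in its 32-hex-digit form. A sketch of the conversion the helper performs, per the `sma/common.sh@40`/`@41` trace lines:

```python
def uuid2nguid(uuid_str: str) -> str:
    # Uppercase the UUID and strip the dashes to get the NVMe NGUID form.
    return uuid_str.upper().replace("-", "")

print(uuid2nguid("334e37fb-3ac1-4a09-afeb-bfc37ef34708"))
# 334E37FB3AC14A09AFEBBFC37EF34708
```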
00:17:14.765   10:14:09 sma.sma_crypto -- sma/crypto.sh@243 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:14.765   10:14:09 sma.sma_crypto -- sma/crypto.sh@120 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:14.765    10:14:09 sma.sma_crypto -- sma/crypto.sh@120 -- # uuid2base64 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:14.765    10:14:09 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:15.024  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:15.024  I0000 00:00:1732094049.938775 1832071 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:15.024  I0000 00:00:1732094049.940599 1832071 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:15.024  I0000 00:00:1732094049.942209 1832075 subchannel.cc:806] subchannel 0x5593c9fde180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5593c9eeb1c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5593c9f8f460, grpc.internal.client_channel_call_destination=0x7f2732cf2390, grpc.internal.event_engine=0x5593c9f51440, grpc.internal.security_connector=0x5593c9e47650, grpc.internal.subchannel_pool=0x5593c9fc5c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5593c9c0e2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:09.941687747+01:00"}), backing off for 999 ms
00:17:15.024  {}
00:17:15.024   10:14:10 sma.sma_crypto -- sma/crypto.sh@247 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708 8 1234567890abcdef1234567890abcdef
00:17:15.024   10:14:10 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:17:15.024   10:14:10 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708 8 1234567890abcdef1234567890abcdef
00:17:15.024   10:14:10 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:17:15.024   10:14:10 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:15.024    10:14:10 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:17:15.024   10:14:10 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:15.024   10:14:10 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708 8 1234567890abcdef1234567890abcdef
00:17:15.024   10:14:10 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:15.024   10:14:10 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:15.024   10:14:10 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:15.024    10:14:10 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 334e37fb-3ac1-4a09-afeb-bfc37ef34708 8 1234567890abcdef1234567890abcdef
00:17:15.024    10:14:10 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=334e37fb-3ac1-4a09-afeb-bfc37ef34708 cipher=8 key=1234567890abcdef1234567890abcdef key2= config
00:17:15.024    10:14:10 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:15.024     10:14:10 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:15.024      10:14:10 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:15.024      10:14:10 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:15.024    10:14:10 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "M043+zrBSgmv67/DfvNHCA==",
00:17:15.024  "nvmf": {
00:17:15.024    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:15.024    "discovery": {
00:17:15.024      "discovery_endpoints": [
00:17:15.024        {
00:17:15.024          "trtype": "tcp",
00:17:15.024          "traddr": "127.0.0.1",
00:17:15.024          "trsvcid": "8009"
00:17:15.024        }
00:17:15.024      ]
00:17:15.024    }
00:17:15.024  }'
00:17:15.024    10:14:10 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:15.024    10:14:10 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:15.024    10:14:10 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n 8 ]]
00:17:15.024    10:14:10 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:15.024     10:14:10 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher 8
00:17:15.024     10:14:10 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:15.024     10:14:10 sma.sma_crypto -- sma/common.sh@30 -- # echo 8
00:17:15.024    10:14:10 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:15.024     10:14:10 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:15.024     10:14:10 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:15.024      10:14:10 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:15.024    10:14:10 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:15.024     10:14:10 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:15.024    10:14:10 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:15.024    "cipher": 8,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:17:15.024  }'
00:17:15.024    10:14:10 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:15.024    10:14:10 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:15.282  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:15.282  I0000 00:00:1732094050.351799 1832192 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:15.282  I0000 00:00:1732094050.353704 1832192 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:15.282  I0000 00:00:1732094050.355563 1832230 subchannel.cc:806] subchannel 0x55d6a0481180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55d6a038e1c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55d6a0432460, grpc.internal.client_channel_call_destination=0x7f45b1a49390, grpc.internal.event_engine=0x55d6a03f4440, grpc.internal.security_connector=0x55d6a0468d00, grpc.internal.subchannel_pool=0x55d6a0468c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55d6a00b12f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:10.354975566+01:00"}), backing off for 1000 ms
00:17:16.655  Traceback (most recent call last):
00:17:16.655    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:16.655      main(sys.argv[1:])
00:17:16.655    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:16.655      result = client.call(request['method'], request.get('params', {}))
00:17:16.655               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:16.655    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:16.655      response = func(request=json_format.ParseDict(params, input()))
00:17:16.655                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:16.655    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:16.655      return _end_unary_response_blocking(state, call, False, None)
00:17:16.655             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:16.655    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:16.655      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:16.655      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:16.655  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:16.655  	status = StatusCode.INVALID_ARGUMENT
00:17:16.655  	details = "Invalid volume crypto configuration: bad cipher"
00:17:16.655  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-11-20T10:14:11.475679896+01:00", grpc_status:3, grpc_message:"Invalid volume crypto configuration: bad cipher"}"
00:17:16.655  >
00:17:16.655   10:14:11 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:17:16.655   10:14:11 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:16.655   10:14:11 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:16.655   10:14:11 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:16.655    10:14:11 sma.sma_crypto -- sma/crypto.sh@248 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:16.655    10:14:11 sma.sma_crypto -- sma/crypto.sh@248 -- # jq -r '.[0].namespaces | length'
00:17:16.655    10:14:11 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:16.655    10:14:11 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:16.655    10:14:11 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:16.655   10:14:11 sma.sma_crypto -- sma/crypto.sh@248 -- # [[ 0 -eq 0 ]]
00:17:16.655    10:14:11 sma.sma_crypto -- sma/crypto.sh@249 -- # rpc_cmd bdev_nvme_get_discovery_info
00:17:16.655    10:14:11 sma.sma_crypto -- sma/crypto.sh@249 -- # jq -r '. | length'
00:17:16.655    10:14:11 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:16.655    10:14:11 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:16.655    10:14:11 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:16.655   10:14:11 sma.sma_crypto -- sma/crypto.sh@249 -- # [[ 0 -eq 0 ]]
00:17:16.655    10:14:11 sma.sma_crypto -- sma/crypto.sh@250 -- # rpc_cmd bdev_get_bdevs
00:17:16.655    10:14:11 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:16.655    10:14:11 sma.sma_crypto -- sma/crypto.sh@250 -- # jq -r length
00:17:16.655    10:14:11 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:16.656    10:14:11 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:16.656   10:14:11 sma.sma_crypto -- sma/crypto.sh@250 -- # [[ 0 -eq 0 ]]
00:17:16.656   10:14:11 sma.sma_crypto -- sma/crypto.sh@252 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:16.656   10:14:11 sma.sma_crypto -- sma/crypto.sh@94 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:16.913  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:16.913  I0000 00:00:1732094051.854119 1832403 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:16.913  I0000 00:00:1732094051.856021 1832403 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:16.913  I0000 00:00:1732094051.857685 1832405 subchannel.cc:806] subchannel 0x555ecc085180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x555ecbf921c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x555ecc036460, grpc.internal.client_channel_call_destination=0x7f1f50f65390, grpc.internal.event_engine=0x555ecbff8440, grpc.internal.security_connector=0x555ecbee1da0, grpc.internal.subchannel_pool=0x555ecc06cc10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x555ecbcb52f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:11.857147611+01:00"}), backing off for 1000 ms
00:17:16.913  {}
00:17:16.913    10:14:11 sma.sma_crypto -- sma/crypto.sh@255 -- # create_device 334e37fb-3ac1-4a09-afeb-bfc37ef34708 AES_CBC 1234567890abcdef1234567890abcdef
00:17:16.913    10:14:11 sma.sma_crypto -- sma/crypto.sh@255 -- # jq -r .handle
00:17:16.913    10:14:11 sma.sma_crypto -- sma/crypto.sh@77 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:16.913     10:14:11 sma.sma_crypto -- sma/crypto.sh@77 -- # gen_volume_params 334e37fb-3ac1-4a09-afeb-bfc37ef34708 AES_CBC 1234567890abcdef1234567890abcdef
00:17:16.913     10:14:11 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=334e37fb-3ac1-4a09-afeb-bfc37ef34708 cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2= config
00:17:16.913     10:14:11 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:16.913      10:14:11 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:16.913       10:14:11 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:16.913       10:14:11 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:16.913     10:14:11 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "M043+zrBSgmv67/DfvNHCA==",
00:17:16.913  "nvmf": {
00:17:16.913    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:16.913    "discovery": {
00:17:16.913      "discovery_endpoints": [
00:17:16.913        {
00:17:16.913          "trtype": "tcp",
00:17:16.913          "traddr": "127.0.0.1",
00:17:16.913          "trsvcid": "8009"
00:17:16.913        }
00:17:16.913      ]
00:17:16.913    }
00:17:16.913  }'
00:17:16.913     10:14:11 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:16.913     10:14:11 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:16.913     10:14:11 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:17:16.913     10:14:11 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:16.913      10:14:11 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:17:16.913      10:14:11 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:16.913      10:14:11 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:17:16.913     10:14:11 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:16.913      10:14:11 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:16.913      10:14:11 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/63
00:17:16.913       10:14:11 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:16.913     10:14:11 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:16.913      10:14:11 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:16.913     10:14:11 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:16.913    "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:17:16.913  }'
00:17:16.913     10:14:11 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:16.913     10:14:11 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:17.171  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:17.171  I0000 00:00:1732094052.177930 1832428 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:17.171  I0000 00:00:1732094052.179789 1832428 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:17.171  I0000 00:00:1732094052.181584 1832441 subchannel.cc:806] subchannel 0x55c8c45a6d70 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55c8c444b1c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55c8c44ef460, grpc.internal.client_channel_call_destination=0x7f4a87538390, grpc.internal.event_engine=0x55c8c42e59a0, grpc.internal.security_connector=0x55c8c45279c0, grpc.internal.subchannel_pool=0x55c8c4525910, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55c8c41429a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:12.181058947+01:00"}), backing off for 1000 ms
00:17:18.545  [2024-11-20 10:14:13.318236] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:17:18.545   10:14:13 sma.sma_crypto -- sma/crypto.sh@255 -- # device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:18.545   10:14:13 sma.sma_crypto -- sma/crypto.sh@256 -- # verify_crypto_volume nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:18.545   10:14:13 sma.sma_crypto -- sma/crypto.sh@132 -- # local nqn=nqn.2016-06.io.spdk:cnode0 uuid=334e37fb-3ac1-4a09-afeb-bfc37ef34708 ns ns_bdev
00:17:18.545    10:14:13 sma.sma_crypto -- sma/crypto.sh@134 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:18.545    10:14:13 sma.sma_crypto -- sma/crypto.sh@134 -- # jq -r '.[0].namespaces[0]'
00:17:18.545    10:14:13 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:18.545    10:14:13 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:18.545    10:14:13 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:18.545   10:14:13 sma.sma_crypto -- sma/crypto.sh@134 -- # ns='{
00:17:18.545    "nsid": 1,
00:17:18.545    "bdev_name": "c99bc608-17c0-4eaf-b181-4504240e0608",
00:17:18.545    "name": "c99bc608-17c0-4eaf-b181-4504240e0608",
00:17:18.545    "nguid": "334E37FB3AC14A09AFEBBFC37EF34708",
00:17:18.545    "uuid": "334e37fb-3ac1-4a09-afeb-bfc37ef34708"
00:17:18.545  }'
00:17:18.545    10:14:13 sma.sma_crypto -- sma/crypto.sh@135 -- # jq -r .name
00:17:18.545   10:14:13 sma.sma_crypto -- sma/crypto.sh@135 -- # ns_bdev=c99bc608-17c0-4eaf-b181-4504240e0608
00:17:18.545    10:14:13 sma.sma_crypto -- sma/crypto.sh@138 -- # rpc_cmd bdev_get_bdevs -b c99bc608-17c0-4eaf-b181-4504240e0608
00:17:18.545    10:14:13 sma.sma_crypto -- sma/crypto.sh@138 -- # jq -r '.[0].product_name'
00:17:18.545    10:14:13 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:18.545    10:14:13 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:18.545    10:14:13 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:18.546   10:14:13 sma.sma_crypto -- sma/crypto.sh@138 -- # [[ crypto == crypto ]]
00:17:18.546    10:14:13 sma.sma_crypto -- sma/crypto.sh@139 -- # rpc_cmd bdev_get_bdevs
00:17:18.546    10:14:13 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:18.546    10:14:13 sma.sma_crypto -- sma/crypto.sh@139 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:17:18.546    10:14:13 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:18.546    10:14:13 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:18.546   10:14:13 sma.sma_crypto -- sma/crypto.sh@139 -- # [[ 1 -eq 1 ]]
00:17:18.546    10:14:13 sma.sma_crypto -- sma/crypto.sh@141 -- # jq -r .uuid
00:17:18.546   10:14:13 sma.sma_crypto -- sma/crypto.sh@141 -- # [[ 334e37fb-3ac1-4a09-afeb-bfc37ef34708 == \3\3\4\e\3\7\f\b\-\3\a\c\1\-\4\a\0\9\-\a\f\e\b\-\b\f\c\3\7\e\f\3\4\7\0\8 ]]
00:17:18.546    10:14:13 sma.sma_crypto -- sma/crypto.sh@142 -- # jq -r .nguid
00:17:18.546    10:14:13 sma.sma_crypto -- sma/crypto.sh@142 -- # uuid2nguid 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:18.546    10:14:13 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=334E37FB-3AC1-4A09-AFEB-BFC37EF34708
00:17:18.546    10:14:13 sma.sma_crypto -- sma/common.sh@41 -- # echo 334E37FB3AC14A09AFEBBFC37EF34708
00:17:18.546   10:14:13 sma.sma_crypto -- sma/crypto.sh@142 -- # [[ 334E37FB3AC14A09AFEBBFC37EF34708 == \3\3\4\E\3\7\F\B\3\A\C\1\4\A\0\9\A\F\E\B\B\F\C\3\7\E\F\3\4\7\0\8 ]]
00:17:18.546   10:14:13 sma.sma_crypto -- sma/crypto.sh@258 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:18.546   10:14:13 sma.sma_crypto -- sma/crypto.sh@120 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:18.546    10:14:13 sma.sma_crypto -- sma/crypto.sh@120 -- # uuid2base64 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:18.546    10:14:13 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:18.804  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:18.804  I0000 00:00:1732094053.878440 1832738 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:18.804  I0000 00:00:1732094053.880416 1832738 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:18.804  I0000 00:00:1732094053.882018 1832746 subchannel.cc:806] subchannel 0x55909b3be180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55909b2cb1c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55909b36f460, grpc.internal.client_channel_call_destination=0x7f59d5e68390, grpc.internal.event_engine=0x55909b331440, grpc.internal.security_connector=0x55909b227650, grpc.internal.subchannel_pool=0x55909b3a5c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55909afee2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:13.881549969+01:00"}), backing off for 999 ms
00:17:19.062  {}
00:17:19.062   10:14:13 sma.sma_crypto -- sma/crypto.sh@259 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:19.062   10:14:13 sma.sma_crypto -- sma/crypto.sh@94 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:19.320  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:19.320  I0000 00:00:1732094054.197894 1832767 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:19.320  I0000 00:00:1732094054.199583 1832767 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:19.320  I0000 00:00:1732094054.201077 1832772 subchannel.cc:806] subchannel 0x55c5b8d88180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55c5b8c951c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55c5b8d39460, grpc.internal.client_channel_call_destination=0x7f9b0ece4390, grpc.internal.event_engine=0x55c5b8cfb440, grpc.internal.security_connector=0x55c5b8be4da0, grpc.internal.subchannel_pool=0x55c5b8d6fc10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55c5b89b82f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:14.200610087+01:00"}), backing off for 999 ms
00:17:19.320  {}
00:17:19.320   10:14:14 sma.sma_crypto -- sma/crypto.sh@263 -- # NOT create_device 334e37fb-3ac1-4a09-afeb-bfc37ef34708 8 1234567890abcdef1234567890abcdef
00:17:19.320   10:14:14 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:17:19.320   10:14:14 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg create_device 334e37fb-3ac1-4a09-afeb-bfc37ef34708 8 1234567890abcdef1234567890abcdef
00:17:19.320   10:14:14 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=create_device
00:17:19.320   10:14:14 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:19.320    10:14:14 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t create_device
00:17:19.320   10:14:14 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:19.320   10:14:14 sma.sma_crypto -- common/autotest_common.sh@655 -- # create_device 334e37fb-3ac1-4a09-afeb-bfc37ef34708 8 1234567890abcdef1234567890abcdef
00:17:19.320   10:14:14 sma.sma_crypto -- sma/crypto.sh@77 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:19.320    10:14:14 sma.sma_crypto -- sma/crypto.sh@77 -- # gen_volume_params 334e37fb-3ac1-4a09-afeb-bfc37ef34708 8 1234567890abcdef1234567890abcdef
00:17:19.320    10:14:14 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=334e37fb-3ac1-4a09-afeb-bfc37ef34708 cipher=8 key=1234567890abcdef1234567890abcdef key2= config
00:17:19.320    10:14:14 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:19.320     10:14:14 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:19.320      10:14:14 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:19.320      10:14:14 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:19.320    10:14:14 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "M043+zrBSgmv67/DfvNHCA==",
00:17:19.320  "nvmf": {
00:17:19.320    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:19.320    "discovery": {
00:17:19.320      "discovery_endpoints": [
00:17:19.320        {
00:17:19.320          "trtype": "tcp",
00:17:19.320          "traddr": "127.0.0.1",
00:17:19.320          "trsvcid": "8009"
00:17:19.320        }
00:17:19.320      ]
00:17:19.320    }
00:17:19.320  }'
00:17:19.320    10:14:14 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:19.320    10:14:14 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:19.320    10:14:14 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n 8 ]]
00:17:19.320    10:14:14 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:19.320     10:14:14 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher 8
00:17:19.320     10:14:14 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:19.320     10:14:14 sma.sma_crypto -- sma/common.sh@30 -- # echo 8
00:17:19.320    10:14:14 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:19.320     10:14:14 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:19.320     10:14:14 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:19.320      10:14:14 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:19.320    10:14:14 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:19.320     10:14:14 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:19.320    10:14:14 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:19.320    "cipher": 8,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:17:19.320  }'
00:17:19.320    10:14:14 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:19.320    10:14:14 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:19.579  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:19.579  I0000 00:00:1732094054.534568 1832793 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:19.579  I0000 00:00:1732094054.536331 1832793 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:19.579  I0000 00:00:1732094054.538206 1832876 subchannel.cc:806] subchannel 0x55f317661d70 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55f3175061c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55f3175aa460, grpc.internal.client_channel_call_destination=0x7f526c8c1390, grpc.internal.event_engine=0x55f3173a09a0, grpc.internal.security_connector=0x55f3175e29c0, grpc.internal.subchannel_pool=0x55f3175e0910, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55f3171fd9a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:14.537678143+01:00"}), backing off for 999 ms
00:17:20.951  Traceback (most recent call last):
00:17:20.952    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:20.952      main(sys.argv[1:])
00:17:20.952    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:20.952      result = client.call(request['method'], request.get('params', {}))
00:17:20.952               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:20.952    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:20.952      response = func(request=json_format.ParseDict(params, input()))
00:17:20.952                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:20.952    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:20.952      return _end_unary_response_blocking(state, call, False, None)
00:17:20.952             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:20.952    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:20.952      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:20.952      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:20.952  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:20.952  	status = StatusCode.INVALID_ARGUMENT
00:17:20.952  	details = "Invalid volume crypto configuration: bad cipher"
00:17:20.952  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-11-20T10:14:15.666474787+01:00", grpc_status:3, grpc_message:"Invalid volume crypto configuration: bad cipher"}"
00:17:20.952  >
00:17:20.952   10:14:15 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:17:20.952   10:14:15 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:20.952   10:14:15 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:20.952   10:14:15 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:20.952    10:14:15 sma.sma_crypto -- sma/crypto.sh@264 -- # rpc_cmd bdev_nvme_get_discovery_info
00:17:20.952    10:14:15 sma.sma_crypto -- sma/crypto.sh@264 -- # jq -r '. | length'
00:17:20.952    10:14:15 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:20.952    10:14:15 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:20.952    10:14:15 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:20.952   10:14:15 sma.sma_crypto -- sma/crypto.sh@264 -- # [[ 0 -eq 0 ]]
00:17:20.952    10:14:15 sma.sma_crypto -- sma/crypto.sh@265 -- # jq -r length
00:17:20.952    10:14:15 sma.sma_crypto -- sma/crypto.sh@265 -- # rpc_cmd bdev_get_bdevs
00:17:20.952    10:14:15 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:20.952    10:14:15 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:20.952    10:14:15 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:20.952   10:14:15 sma.sma_crypto -- sma/crypto.sh@265 -- # [[ 0 -eq 0 ]]
00:17:20.952    10:14:15 sma.sma_crypto -- sma/crypto.sh@266 -- # rpc_cmd nvmf_get_subsystems
00:17:20.952    10:14:15 sma.sma_crypto -- sma/crypto.sh@266 -- # jq -r '[.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode0")] | length'
00:17:20.952    10:14:15 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:20.952    10:14:15 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:20.952    10:14:15 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:20.952   10:14:15 sma.sma_crypto -- sma/crypto.sh@266 -- # [[ 0 -eq 0 ]]
00:17:20.952   10:14:15 sma.sma_crypto -- sma/crypto.sh@269 -- # killprocess 1830699
00:17:20.952   10:14:15 sma.sma_crypto -- common/autotest_common.sh@954 -- # '[' -z 1830699 ']'
00:17:20.952   10:14:15 sma.sma_crypto -- common/autotest_common.sh@958 -- # kill -0 1830699
00:17:20.952    10:14:15 sma.sma_crypto -- common/autotest_common.sh@959 -- # uname
00:17:20.952   10:14:15 sma.sma_crypto -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:20.952    10:14:15 sma.sma_crypto -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1830699
00:17:20.952   10:14:15 sma.sma_crypto -- common/autotest_common.sh@960 -- # process_name=python3
00:17:20.952   10:14:15 sma.sma_crypto -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:17:20.952   10:14:15 sma.sma_crypto -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1830699'
00:17:20.952  killing process with pid 1830699
00:17:20.952   10:14:15 sma.sma_crypto -- common/autotest_common.sh@973 -- # kill 1830699
00:17:20.952   10:14:15 sma.sma_crypto -- common/autotest_common.sh@978 -- # wait 1830699
00:17:20.952   10:14:15 sma.sma_crypto -- sma/crypto.sh@278 -- # smapid=1833099
00:17:20.952   10:14:15 sma.sma_crypto -- sma/crypto.sh@280 -- # sma_waitforlisten
00:17:20.952   10:14:15 sma.sma_crypto -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:17:20.952   10:14:15 sma.sma_crypto -- sma/common.sh@8 -- # local sma_port=8080
00:17:20.952   10:14:15 sma.sma_crypto -- sma/crypto.sh@270 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:17:20.952   10:14:15 sma.sma_crypto -- sma/common.sh@10 -- # (( i = 0 ))
00:17:20.952    10:14:15 sma.sma_crypto -- sma/crypto.sh@270 -- # cat
00:17:20.952   10:14:15 sma.sma_crypto -- sma/common.sh@10 -- # (( i < 5 ))
00:17:20.952   10:14:15 sma.sma_crypto -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:17:20.952   10:14:15 sma.sma_crypto -- sma/common.sh@14 -- # sleep 1s
00:17:21.209  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:21.210  I0000 00:00:1732094056.136978 1833099 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:22.143   10:14:16 sma.sma_crypto -- sma/common.sh@10 -- # (( i++ ))
00:17:22.143   10:14:16 sma.sma_crypto -- sma/common.sh@10 -- # (( i < 5 ))
00:17:22.143   10:14:16 sma.sma_crypto -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:17:22.143   10:14:16 sma.sma_crypto -- sma/common.sh@12 -- # return 0
00:17:22.143    10:14:16 sma.sma_crypto -- sma/crypto.sh@281 -- # create_device
00:17:22.143    10:14:16 sma.sma_crypto -- sma/crypto.sh@281 -- # jq -r .handle
00:17:22.143    10:14:16 sma.sma_crypto -- sma/crypto.sh@77 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:22.143  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:22.143  I0000 00:00:1732094057.174640 1833262 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:22.143  I0000 00:00:1732094057.176325 1833262 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:22.143  I0000 00:00:1732094057.177925 1833263 subchannel.cc:806] subchannel 0x555bb9f49180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x555bb9e561c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x555bb9efa460, grpc.internal.client_channel_call_destination=0x7f1dd562e390, grpc.internal.event_engine=0x555bb9ebc440, grpc.internal.security_connector=0x555bb9db2650, grpc.internal.subchannel_pool=0x555bb9f30c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x555bb9b792f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:17.177361282+01:00"}), backing off for 1000 ms
00:17:22.143  [2024-11-20 10:14:17.199210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:17:22.143   10:14:17 sma.sma_crypto -- sma/crypto.sh@281 -- # device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:22.143   10:14:17 sma.sma_crypto -- sma/crypto.sh@283 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708 AES_CBC 1234567890abcdef1234567890abcdef
00:17:22.143   10:14:17 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:17:22.143   10:14:17 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708 AES_CBC 1234567890abcdef1234567890abcdef
00:17:22.143   10:14:17 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:17:22.143   10:14:17 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:22.143    10:14:17 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:17:22.143   10:14:17 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:22.143   10:14:17 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 334e37fb-3ac1-4a09-afeb-bfc37ef34708 AES_CBC 1234567890abcdef1234567890abcdef
00:17:22.143   10:14:17 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:22.143   10:14:17 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:22.143   10:14:17 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:22.143    10:14:17 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 334e37fb-3ac1-4a09-afeb-bfc37ef34708 AES_CBC 1234567890abcdef1234567890abcdef
00:17:22.143    10:14:17 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=334e37fb-3ac1-4a09-afeb-bfc37ef34708 cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2= config
00:17:22.143    10:14:17 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:22.143     10:14:17 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:22.143      10:14:17 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 334e37fb-3ac1-4a09-afeb-bfc37ef34708
00:17:22.143      10:14:17 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:22.402    10:14:17 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "M043+zrBSgmv67/DfvNHCA==",
00:17:22.402  "nvmf": {
00:17:22.402    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:22.402    "discovery": {
00:17:22.402      "discovery_endpoints": [
00:17:22.402        {
00:17:22.402          "trtype": "tcp",
00:17:22.402          "traddr": "127.0.0.1",
00:17:22.402          "trsvcid": "8009"
00:17:22.402        }
00:17:22.402      ]
00:17:22.402    }
00:17:22.402  }'
00:17:22.402    10:14:17 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:22.402    10:14:17 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:22.402    10:14:17 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:17:22.402    10:14:17 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:22.402     10:14:17 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:17:22.402     10:14:17 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:22.402     10:14:17 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:17:22.402    10:14:17 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:22.402     10:14:17 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:22.402     10:14:17 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:22.402      10:14:17 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:22.402    10:14:17 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:22.402     10:14:17 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:22.402    10:14:17 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:22.402    "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:17:22.402  }'
00:17:22.402    10:14:17 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:22.402    10:14:17 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:22.402  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:22.402  I0000 00:00:1732094057.516599 1833291 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:22.402  I0000 00:00:1732094057.518415 1833291 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:22.402  I0000 00:00:1732094057.520116 1833313 subchannel.cc:806] subchannel 0x557ff537b180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x557ff52881c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x557ff532c460, grpc.internal.client_channel_call_destination=0x7f69e3c02390, grpc.internal.event_engine=0x557ff52ee440, grpc.internal.security_connector=0x557ff5362d00, grpc.internal.subchannel_pool=0x557ff5362c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x557ff4fab2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:17.519641164+01:00"}), backing off for 999 ms
00:17:23.774  Traceback (most recent call last):
00:17:23.774    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:23.774      main(sys.argv[1:])
00:17:23.774    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:23.774      result = client.call(request['method'], request.get('params', {}))
00:17:23.774               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:23.774    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:23.774      response = func(request=json_format.ParseDict(params, input()))
00:17:23.774                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:23.774    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:23.774      return _end_unary_response_blocking(state, call, False, None)
00:17:23.774             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:23.774    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:23.774      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:23.774      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:23.774  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:23.774  	status = StatusCode.INVALID_ARGUMENT
00:17:23.775  	details = "Crypto is disabled"
00:17:23.775  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-11-20T10:14:18.640866826+01:00", grpc_status:3, grpc_message:"Crypto is disabled"}"
00:17:23.775  >
00:17:23.775   10:14:18 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:17:23.775   10:14:18 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:23.775   10:14:18 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:23.775   10:14:18 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:23.775    10:14:18 sma.sma_crypto -- sma/crypto.sh@284 -- # rpc_cmd bdev_nvme_get_discovery_info
00:17:23.775    10:14:18 sma.sma_crypto -- sma/crypto.sh@284 -- # jq -r '. | length'
00:17:23.775    10:14:18 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.775    10:14:18 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:23.775    10:14:18 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.775   10:14:18 sma.sma_crypto -- sma/crypto.sh@284 -- # [[ 0 -eq 0 ]]
00:17:23.775    10:14:18 sma.sma_crypto -- sma/crypto.sh@285 -- # rpc_cmd bdev_get_bdevs
00:17:23.775    10:14:18 sma.sma_crypto -- sma/crypto.sh@285 -- # jq -r length
00:17:23.775    10:14:18 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.775    10:14:18 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:23.775    10:14:18 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.775   10:14:18 sma.sma_crypto -- sma/crypto.sh@285 -- # [[ 0 -eq 0 ]]
00:17:23.775   10:14:18 sma.sma_crypto -- sma/crypto.sh@287 -- # cleanup
00:17:23.775   10:14:18 sma.sma_crypto -- sma/crypto.sh@22 -- # killprocess 1833099
00:17:23.775   10:14:18 sma.sma_crypto -- common/autotest_common.sh@954 -- # '[' -z 1833099 ']'
00:17:23.775   10:14:18 sma.sma_crypto -- common/autotest_common.sh@958 -- # kill -0 1833099
00:17:23.775    10:14:18 sma.sma_crypto -- common/autotest_common.sh@959 -- # uname
00:17:23.775   10:14:18 sma.sma_crypto -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:23.775    10:14:18 sma.sma_crypto -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1833099
00:17:23.775   10:14:18 sma.sma_crypto -- common/autotest_common.sh@960 -- # process_name=python3
00:17:23.775   10:14:18 sma.sma_crypto -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:17:23.775   10:14:18 sma.sma_crypto -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1833099'
00:17:23.775  killing process with pid 1833099
00:17:23.775   10:14:18 sma.sma_crypto -- common/autotest_common.sh@973 -- # kill 1833099
00:17:23.775   10:14:18 sma.sma_crypto -- common/autotest_common.sh@978 -- # wait 1833099
00:17:23.775   10:14:18 sma.sma_crypto -- sma/crypto.sh@23 -- # killprocess 1830300
00:17:23.775   10:14:18 sma.sma_crypto -- common/autotest_common.sh@954 -- # '[' -z 1830300 ']'
00:17:23.775   10:14:18 sma.sma_crypto -- common/autotest_common.sh@958 -- # kill -0 1830300
00:17:23.775    10:14:18 sma.sma_crypto -- common/autotest_common.sh@959 -- # uname
00:17:23.775   10:14:18 sma.sma_crypto -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:23.775    10:14:18 sma.sma_crypto -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1830300
00:17:23.775   10:14:18 sma.sma_crypto -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:23.775   10:14:18 sma.sma_crypto -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:23.775   10:14:18 sma.sma_crypto -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1830300'
00:17:23.775  killing process with pid 1830300
00:17:23.775   10:14:18 sma.sma_crypto -- common/autotest_common.sh@973 -- # kill 1830300
00:17:23.775   10:14:18 sma.sma_crypto -- common/autotest_common.sh@978 -- # wait 1830300
00:17:25.673   10:14:20 sma.sma_crypto -- sma/crypto.sh@24 -- # killprocess 1830698
00:17:25.673   10:14:20 sma.sma_crypto -- common/autotest_common.sh@954 -- # '[' -z 1830698 ']'
00:17:25.673   10:14:20 sma.sma_crypto -- common/autotest_common.sh@958 -- # kill -0 1830698
00:17:25.673    10:14:20 sma.sma_crypto -- common/autotest_common.sh@959 -- # uname
00:17:25.673   10:14:20 sma.sma_crypto -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:25.673    10:14:20 sma.sma_crypto -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1830698
00:17:25.673   10:14:20 sma.sma_crypto -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:17:25.673   10:14:20 sma.sma_crypto -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:17:25.673   10:14:20 sma.sma_crypto -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1830698'
00:17:25.673  killing process with pid 1830698
00:17:25.673   10:14:20 sma.sma_crypto -- common/autotest_common.sh@973 -- # kill 1830698
00:17:25.673   10:14:20 sma.sma_crypto -- common/autotest_common.sh@978 -- # wait 1830698
00:17:28.203   10:14:22 sma.sma_crypto -- sma/crypto.sh@288 -- # trap - SIGINT SIGTERM EXIT
00:17:28.203  
00:17:28.203  real	0m24.793s
00:17:28.203  user	0m51.567s
00:17:28.203  sys	0m3.349s
00:17:28.203   10:14:22 sma.sma_crypto -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:28.203   10:14:22 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:28.203  ************************************
00:17:28.203  END TEST sma_crypto
00:17:28.203  ************************************
00:17:28.203   10:14:22 sma -- sma/sma.sh@17 -- # run_test sma_qos /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/qos.sh
00:17:28.203   10:14:22 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:28.203   10:14:22 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:28.203   10:14:22 sma -- common/autotest_common.sh@10 -- # set +x
00:17:28.203  ************************************
00:17:28.203  START TEST sma_qos
00:17:28.203  ************************************
00:17:28.203   10:14:22 sma.sma_qos -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/qos.sh
00:17:28.203  * Looking for test storage...
00:17:28.203  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:17:28.203    10:14:22 sma.sma_qos -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:17:28.203     10:14:22 sma.sma_qos -- common/autotest_common.sh@1693 -- # lcov --version
00:17:28.203     10:14:22 sma.sma_qos -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:17:28.203    10:14:23 sma.sma_qos -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:17:28.203    10:14:23 sma.sma_qos -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:28.203    10:14:23 sma.sma_qos -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:28.203    10:14:23 sma.sma_qos -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:28.203    10:14:23 sma.sma_qos -- scripts/common.sh@336 -- # IFS=.-:
00:17:28.203    10:14:23 sma.sma_qos -- scripts/common.sh@336 -- # read -ra ver1
00:17:28.203    10:14:23 sma.sma_qos -- scripts/common.sh@337 -- # IFS=.-:
00:17:28.203    10:14:23 sma.sma_qos -- scripts/common.sh@337 -- # read -ra ver2
00:17:28.203    10:14:23 sma.sma_qos -- scripts/common.sh@338 -- # local 'op=<'
00:17:28.203    10:14:23 sma.sma_qos -- scripts/common.sh@340 -- # ver1_l=2
00:17:28.203    10:14:23 sma.sma_qos -- scripts/common.sh@341 -- # ver2_l=1
00:17:28.203    10:14:23 sma.sma_qos -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:28.203    10:14:23 sma.sma_qos -- scripts/common.sh@344 -- # case "$op" in
00:17:28.203    10:14:23 sma.sma_qos -- scripts/common.sh@345 -- # : 1
00:17:28.203    10:14:23 sma.sma_qos -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:28.203    10:14:23 sma.sma_qos -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:28.203     10:14:23 sma.sma_qos -- scripts/common.sh@365 -- # decimal 1
00:17:28.203     10:14:23 sma.sma_qos -- scripts/common.sh@353 -- # local d=1
00:17:28.203     10:14:23 sma.sma_qos -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:28.203     10:14:23 sma.sma_qos -- scripts/common.sh@355 -- # echo 1
00:17:28.203    10:14:23 sma.sma_qos -- scripts/common.sh@365 -- # ver1[v]=1
00:17:28.203     10:14:23 sma.sma_qos -- scripts/common.sh@366 -- # decimal 2
00:17:28.203     10:14:23 sma.sma_qos -- scripts/common.sh@353 -- # local d=2
00:17:28.203     10:14:23 sma.sma_qos -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:28.203     10:14:23 sma.sma_qos -- scripts/common.sh@355 -- # echo 2
00:17:28.203    10:14:23 sma.sma_qos -- scripts/common.sh@366 -- # ver2[v]=2
00:17:28.203    10:14:23 sma.sma_qos -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:28.203    10:14:23 sma.sma_qos -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:28.203    10:14:23 sma.sma_qos -- scripts/common.sh@368 -- # return 0
00:17:28.203    10:14:23 sma.sma_qos -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:28.203    10:14:23 sma.sma_qos -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:17:28.203  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:28.203  		--rc genhtml_branch_coverage=1
00:17:28.203  		--rc genhtml_function_coverage=1
00:17:28.203  		--rc genhtml_legend=1
00:17:28.203  		--rc geninfo_all_blocks=1
00:17:28.203  		--rc geninfo_unexecuted_blocks=1
00:17:28.203  		
00:17:28.203  		'
00:17:28.203    10:14:23 sma.sma_qos -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:17:28.203  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:28.203  		--rc genhtml_branch_coverage=1
00:17:28.203  		--rc genhtml_function_coverage=1
00:17:28.203  		--rc genhtml_legend=1
00:17:28.203  		--rc geninfo_all_blocks=1
00:17:28.203  		--rc geninfo_unexecuted_blocks=1
00:17:28.203  		
00:17:28.203  		'
00:17:28.203    10:14:23 sma.sma_qos -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:17:28.204  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:28.204  		--rc genhtml_branch_coverage=1
00:17:28.204  		--rc genhtml_function_coverage=1
00:17:28.204  		--rc genhtml_legend=1
00:17:28.204  		--rc geninfo_all_blocks=1
00:17:28.204  		--rc geninfo_unexecuted_blocks=1
00:17:28.204  		
00:17:28.204  		'
00:17:28.204    10:14:23 sma.sma_qos -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:17:28.204  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:28.204  		--rc genhtml_branch_coverage=1
00:17:28.204  		--rc genhtml_function_coverage=1
00:17:28.204  		--rc genhtml_legend=1
00:17:28.204  		--rc geninfo_all_blocks=1
00:17:28.204  		--rc geninfo_unexecuted_blocks=1
00:17:28.204  		
00:17:28.204  		'
00:17:28.204   10:14:23 sma.sma_qos -- sma/qos.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:17:28.204   10:14:23 sma.sma_qos -- sma/qos.sh@13 -- # smac=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:28.204   10:14:23 sma.sma_qos -- sma/qos.sh@15 -- # device_nvmf_tcp=3
00:17:28.204    10:14:23 sma.sma_qos -- sma/qos.sh@16 -- # printf %u -1
00:17:28.204   10:14:23 sma.sma_qos -- sma/qos.sh@16 -- # limit_reserved=18446744073709551615
00:17:28.204   10:14:23 sma.sma_qos -- sma/qos.sh@42 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:17:28.204   10:14:23 sma.sma_qos -- sma/qos.sh@45 -- # tgtpid=1834073
00:17:28.204   10:14:23 sma.sma_qos -- sma/qos.sh@44 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:17:28.204   10:14:23 sma.sma_qos -- sma/qos.sh@55 -- # smapid=1834074
00:17:28.204   10:14:23 sma.sma_qos -- sma/qos.sh@57 -- # sma_waitforlisten
00:17:28.204   10:14:23 sma.sma_qos -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:17:28.204   10:14:23 sma.sma_qos -- sma/common.sh@8 -- # local sma_port=8080
00:17:28.204   10:14:23 sma.sma_qos -- sma/common.sh@10 -- # (( i = 0 ))
00:17:28.204   10:14:23 sma.sma_qos -- sma/common.sh@10 -- # (( i < 5 ))
00:17:28.204   10:14:23 sma.sma_qos -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:17:28.204   10:14:23 sma.sma_qos -- sma/qos.sh@47 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:17:28.204    10:14:23 sma.sma_qos -- sma/qos.sh@47 -- # cat
00:17:28.204   10:14:23 sma.sma_qos -- sma/common.sh@14 -- # sleep 1s
00:17:28.204  [2024-11-20 10:14:23.133332] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:17:28.204  [2024-11-20 10:14:23.133469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1834073 ]
00:17:28.204  EAL: No free 2048 kB hugepages reported on node 1
00:17:28.204  [2024-11-20 10:14:23.262006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:28.460  [2024-11-20 10:14:23.376507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:29.026   10:14:24 sma.sma_qos -- sma/common.sh@10 -- # (( i++ ))
00:17:29.026   10:14:24 sma.sma_qos -- sma/common.sh@10 -- # (( i < 5 ))
00:17:29.026   10:14:24 sma.sma_qos -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:17:29.026   10:14:24 sma.sma_qos -- sma/common.sh@14 -- # sleep 1s
00:17:29.283  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:29.283  I0000 00:00:1732094064.279797 1834074 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:29.283  [2024-11-20 10:14:24.293679] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:30.216   10:14:25 sma.sma_qos -- sma/common.sh@10 -- # (( i++ ))
00:17:30.216   10:14:25 sma.sma_qos -- sma/common.sh@10 -- # (( i < 5 ))
00:17:30.216   10:14:25 sma.sma_qos -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:17:30.216   10:14:25 sma.sma_qos -- sma/common.sh@12 -- # return 0
00:17:30.216   10:14:25 sma.sma_qos -- sma/qos.sh@60 -- # rpc_cmd bdev_null_create null0 100 4096
00:17:30.216   10:14:25 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:30.216   10:14:25 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:17:30.216  null0
00:17:30.216   10:14:25 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:30.216    10:14:25 sma.sma_qos -- sma/qos.sh@61 -- # rpc_cmd bdev_get_bdevs -b null0
00:17:30.216    10:14:25 sma.sma_qos -- sma/qos.sh@61 -- # jq -r '.[].uuid'
00:17:30.216    10:14:25 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:30.216    10:14:25 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:17:30.216    10:14:25 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:30.216   10:14:25 sma.sma_qos -- sma/qos.sh@61 -- # uuid=a0e0b259-d2c3-47c2-9b24-646cff363179
00:17:30.216    10:14:25 sma.sma_qos -- sma/qos.sh@62 -- # create_device a0e0b259-d2c3-47c2-9b24-646cff363179
00:17:30.216    10:14:25 sma.sma_qos -- sma/qos.sh@24 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:30.216    10:14:25 sma.sma_qos -- sma/qos.sh@62 -- # jq -r .handle
00:17:30.216     10:14:25 sma.sma_qos -- sma/qos.sh@24 -- # uuid2base64 a0e0b259-d2c3-47c2-9b24-646cff363179
00:17:30.216     10:14:25 sma.sma_qos -- sma/common.sh@20 -- # python
00:17:30.473  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:30.473  I0000 00:00:1732094065.443980 1834374 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:30.473  I0000 00:00:1732094065.445836 1834374 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:30.473  I0000 00:00:1732094065.447547 1834384 subchannel.cc:806] subchannel 0x55fff5fc7180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55fff5ed41c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55fff5f78460, grpc.internal.client_channel_call_destination=0x7f103b1af390, grpc.internal.event_engine=0x55fff5f3a440, grpc.internal.security_connector=0x55fff5faed00, grpc.internal.subchannel_pool=0x55fff5faec10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55fff5bf72f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:25.446995208+01:00"}), backing off for 1000 ms
00:17:30.473  [2024-11-20 10:14:25.477733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:17:30.473   10:14:25 sma.sma_qos -- sma/qos.sh@62 -- # device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:30.473   10:14:25 sma.sma_qos -- sma/qos.sh@65 -- # diff /dev/fd/62 /dev/fd/61
00:17:30.473    10:14:25 sma.sma_qos -- sma/qos.sh@65 -- # jq --sort-keys
00:17:30.473    10:14:25 sma.sma_qos -- sma/qos.sh@65 -- # get_qos_caps 3
00:17:30.473    10:14:25 sma.sma_qos -- sma/common.sh@45 -- # local rootdir
00:17:30.473    10:14:25 sma.sma_qos -- sma/qos.sh@65 -- # jq --sort-keys
00:17:30.473     10:14:25 sma.sma_qos -- sma/common.sh@47 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:17:30.473    10:14:25 sma.sma_qos -- sma/common.sh@47 -- # rootdir=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../..
00:17:30.473    10:14:25 sma.sma_qos -- sma/common.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py
00:17:30.730  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:30.730  I0000 00:00:1732094065.742947 1834413 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:30.730  I0000 00:00:1732094065.744783 1834413 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:30.730  I0000 00:00:1732094065.746306 1834421 subchannel.cc:806] subchannel 0x557602580650 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55760242b520, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x557602377060, grpc.internal.client_channel_call_destination=0x7f2cd5168390, grpc.internal.event_engine=0x557602444e50, grpc.internal.security_connector=0x55760232dcb0, grpc.internal.subchannel_pool=0x55760245cd10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x557602224200, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:25.745786525+01:00"}), backing off for 1000 ms
00:17:30.730   10:14:25 sma.sma_qos -- sma/qos.sh@79 -- # NOT get_qos_caps 1234
00:17:30.730   10:14:25 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:17:30.730   10:14:25 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg get_qos_caps 1234
00:17:30.730   10:14:25 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=get_qos_caps
00:17:30.730   10:14:25 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:30.730    10:14:25 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t get_qos_caps
00:17:30.730   10:14:25 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:30.730   10:14:25 sma.sma_qos -- common/autotest_common.sh@655 -- # get_qos_caps 1234
00:17:30.730   10:14:25 sma.sma_qos -- sma/common.sh@45 -- # local rootdir
00:17:30.730    10:14:25 sma.sma_qos -- sma/common.sh@47 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:17:30.730   10:14:25 sma.sma_qos -- sma/common.sh@47 -- # rootdir=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../..
00:17:30.730   10:14:25 sma.sma_qos -- sma/common.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py
00:17:30.987  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:30.987  I0000 00:00:1732094066.011765 1834444 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:30.987  I0000 00:00:1732094066.013530 1834444 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:30.987  I0000 00:00:1732094066.014992 1834563 subchannel.cc:806] subchannel 0x5594847c7650 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x559484672520, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5594845be060, grpc.internal.client_channel_call_destination=0x7fdfb285f390, grpc.internal.event_engine=0x55948468be50, grpc.internal.security_connector=0x559484574cb0, grpc.internal.subchannel_pool=0x5594846a3d10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55948446b200, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:26.014478062+01:00"}), backing off for 999 ms
00:17:30.987  Traceback (most recent call last):
00:17:30.987    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py", line 74, in <module>
00:17:30.987      main(sys.argv[1:])
00:17:30.987    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py", line 69, in main
00:17:30.987      result = client.call(request['method'], request.get('params', {}))
00:17:30.987               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:30.987    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py", line 43, in call
00:17:30.987      response = func(request=json_format.ParseDict(params, input()))
00:17:30.987                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:30.987    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:30.987      return _end_unary_response_blocking(state, call, False, None)
00:17:30.987             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:30.987    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:30.987      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:30.987      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:30.987  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:30.987  	status = StatusCode.INVALID_ARGUMENT
00:17:30.987  	details = "Invalid device type"
00:17:30.987  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-11-20T10:14:26.016150365+01:00", grpc_status:3, grpc_message:"Invalid device type"}"
00:17:30.987  >
00:17:30.987   10:14:26 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:17:30.988   10:14:26 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:30.988   10:14:26 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:30.988   10:14:26 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:30.988   10:14:26 sma.sma_qos -- sma/qos.sh@82 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:30.988    10:14:26 sma.sma_qos -- sma/qos.sh@82 -- # uuid2base64 a0e0b259-d2c3-47c2-9b24-646cff363179
00:17:30.988    10:14:26 sma.sma_qos -- sma/common.sh@20 -- # python
00:17:31.245  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:31.245  I0000 00:00:1732094066.307613 1834583 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:31.245  I0000 00:00:1732094066.309521 1834583 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:31.245  I0000 00:00:1732094066.311084 1834586 subchannel.cc:806] subchannel 0x5570c921f180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5570c912c1c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5570c91d0460, grpc.internal.client_channel_call_destination=0x7fb598530390, grpc.internal.event_engine=0x5570c9192440, grpc.internal.security_connector=0x5570c9206d00, grpc.internal.subchannel_pool=0x5570c9206c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5570c8e4f2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:26.310600582+01:00"}), backing off for 999 ms
00:17:31.245  {}
00:17:31.245   10:14:26 sma.sma_qos -- sma/qos.sh@94 -- # diff /dev/fd/62 /dev/fd/61
00:17:31.245    10:14:26 sma.sma_qos -- sma/qos.sh@94 -- # jq --sort-keys
00:17:31.245    10:14:26 sma.sma_qos -- sma/qos.sh@94 -- # rpc_cmd bdev_get_bdevs -b null0
00:17:31.245    10:14:26 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.245    10:14:26 sma.sma_qos -- sma/qos.sh@94 -- # jq --sort-keys '.[].assigned_rate_limits'
00:17:31.245    10:14:26 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:17:31.503    10:14:26 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:31.503   10:14:26 sma.sma_qos -- sma/qos.sh@106 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:31.503    10:14:26 sma.sma_qos -- sma/qos.sh@106 -- # uuid2base64 a0e0b259-d2c3-47c2-9b24-646cff363179
00:17:31.503    10:14:26 sma.sma_qos -- sma/common.sh@20 -- # python
00:17:31.760  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:31.760  I0000 00:00:1732094066.672019 1834614 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:31.760  I0000 00:00:1732094066.674014 1834614 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:31.760  I0000 00:00:1732094066.675730 1834630 subchannel.cc:806] subchannel 0x5565e9368180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5565e92751c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5565e9319460, grpc.internal.client_channel_call_destination=0x7f1985309390, grpc.internal.event_engine=0x5565e92db440, grpc.internal.security_connector=0x5565e934fd00, grpc.internal.subchannel_pool=0x5565e934fc10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5565e8f982f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:26.675194136+01:00"}), backing off for 1000 ms
00:17:31.760  {}
00:17:31.760   10:14:26 sma.sma_qos -- sma/qos.sh@119 -- # diff /dev/fd/62 /dev/fd/61
00:17:31.760    10:14:26 sma.sma_qos -- sma/qos.sh@119 -- # jq --sort-keys
00:17:31.760    10:14:26 sma.sma_qos -- sma/qos.sh@119 -- # rpc_cmd bdev_get_bdevs -b null0
00:17:31.760    10:14:26 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.760    10:14:26 sma.sma_qos -- sma/qos.sh@119 -- # jq --sort-keys '.[].assigned_rate_limits'
00:17:31.760    10:14:26 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:17:31.760    10:14:26 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:31.760   10:14:26 sma.sma_qos -- sma/qos.sh@131 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:31.760    10:14:26 sma.sma_qos -- sma/qos.sh@131 -- # uuid2base64 a0e0b259-d2c3-47c2-9b24-646cff363179
00:17:31.760    10:14:26 sma.sma_qos -- sma/common.sh@20 -- # python
00:17:32.018  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:32.018  I0000 00:00:1732094067.027680 1834656 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:32.018  I0000 00:00:1732094067.029728 1834656 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:32.018  I0000 00:00:1732094067.031395 1834745 subchannel.cc:806] subchannel 0x561291151180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56129105e1c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x561291102460, grpc.internal.client_channel_call_destination=0x7f9fd8c71390, grpc.internal.event_engine=0x5612910c4440, grpc.internal.security_connector=0x561291138d00, grpc.internal.subchannel_pool=0x561291138c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x561290d812f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:27.030892034+01:00"}), backing off for 1000 ms
00:17:32.018  {}
00:17:32.018   10:14:27 sma.sma_qos -- sma/qos.sh@145 -- # diff /dev/fd/62 /dev/fd/61
00:17:32.018    10:14:27 sma.sma_qos -- sma/qos.sh@145 -- # jq --sort-keys
00:17:32.018    10:14:27 sma.sma_qos -- sma/qos.sh@145 -- # rpc_cmd bdev_get_bdevs -b null0
00:17:32.018    10:14:27 sma.sma_qos -- sma/qos.sh@145 -- # jq --sort-keys '.[].assigned_rate_limits'
00:17:32.018    10:14:27 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.018    10:14:27 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:17:32.018    10:14:27 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.018   10:14:27 sma.sma_qos -- sma/qos.sh@157 -- # unsupported_max_limits=(rd_iops wr_iops)
00:17:32.018   10:14:27 sma.sma_qos -- sma/qos.sh@159 -- # for limit in "${unsupported_max_limits[@]}"
00:17:32.018   10:14:27 sma.sma_qos -- sma/qos.sh@160 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:32.018    10:14:27 sma.sma_qos -- sma/qos.sh@160 -- # uuid2base64 a0e0b259-d2c3-47c2-9b24-646cff363179
00:17:32.018    10:14:27 sma.sma_qos -- sma/common.sh@20 -- # python
00:17:32.276   10:14:27 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:17:32.276   10:14:27 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:32.276   10:14:27 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:32.276   10:14:27 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:32.276    10:14:27 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:32.276   10:14:27 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:32.276    10:14:27 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:32.276   10:14:27 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:32.276   10:14:27 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:32.276   10:14:27 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:17:32.276   10:14:27 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:32.534  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:32.534  I0000 00:00:1732094067.398614 1834808 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:32.534  I0000 00:00:1732094067.400468 1834808 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:32.534  I0000 00:00:1732094067.402028 1834809 subchannel.cc:806] subchannel 0x55600f168180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55600f0751c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55600f119460, grpc.internal.client_channel_call_destination=0x7f473a33f390, grpc.internal.event_engine=0x55600f0db440, grpc.internal.security_connector=0x55600efd1650, grpc.internal.subchannel_pool=0x55600f14fc10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55600ed982f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:27.401546329+01:00"}), backing off for 999 ms
00:17:32.534  Traceback (most recent call last):
00:17:32.534    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:32.534      main(sys.argv[1:])
00:17:32.534    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:32.534      result = client.call(request['method'], request.get('params', {}))
00:17:32.534               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:32.534    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:32.534      response = func(request=json_format.ParseDict(params, input()))
00:17:32.534                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:32.534    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:32.534      return _end_unary_response_blocking(state, call, False, None)
00:17:32.534             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:32.534    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:32.534      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:32.534      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:32.534  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:32.534  	status = StatusCode.INVALID_ARGUMENT
00:17:32.534  	details = "Unsupported QoS limit: maximum.rd_iops"
00:17:32.534  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Unsupported QoS limit: maximum.rd_iops", grpc_status:3, created_time:"2024-11-20T10:14:27.419475951+01:00"}"
00:17:32.534  >
00:17:32.534   10:14:27 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:17:32.534   10:14:27 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:32.534   10:14:27 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:32.534   10:14:27 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:32.534   10:14:27 sma.sma_qos -- sma/qos.sh@159 -- # for limit in "${unsupported_max_limits[@]}"
00:17:32.534   10:14:27 sma.sma_qos -- sma/qos.sh@160 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:32.534    10:14:27 sma.sma_qos -- sma/qos.sh@160 -- # uuid2base64 a0e0b259-d2c3-47c2-9b24-646cff363179
00:17:32.534    10:14:27 sma.sma_qos -- sma/common.sh@20 -- # python
00:17:32.534   10:14:27 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:17:32.534   10:14:27 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:32.534   10:14:27 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:32.534   10:14:27 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:32.534    10:14:27 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:32.534   10:14:27 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:32.534    10:14:27 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:32.534   10:14:27 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:32.534   10:14:27 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:32.534   10:14:27 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:17:32.534   10:14:27 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:32.792  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:32.792  I0000 00:00:1732094067.722968 1834833 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:32.792  I0000 00:00:1732094067.724764 1834833 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:32.792  I0000 00:00:1732094067.726411 1834846 subchannel.cc:806] subchannel 0x5601ee5b2180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5601ee4bf1c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5601ee563460, grpc.internal.client_channel_call_destination=0x7f6405b96390, grpc.internal.event_engine=0x5601ee525440, grpc.internal.security_connector=0x5601ee41b650, grpc.internal.subchannel_pool=0x5601ee599c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5601ee1e22f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:27.725905605+01:00"}), backing off for 1000 ms
00:17:32.792  Traceback (most recent call last):
00:17:32.792    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:32.792      main(sys.argv[1:])
00:17:32.792    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:32.792      result = client.call(request['method'], request.get('params', {}))
00:17:32.792               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:32.792    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:32.792      response = func(request=json_format.ParseDict(params, input()))
00:17:32.792                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:32.792    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:32.792      return _end_unary_response_blocking(state, call, False, None)
00:17:32.792             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:32.792    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:32.792      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:32.792      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:32.792  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:32.792  	status = StatusCode.INVALID_ARGUMENT
00:17:32.792  	details = "Unsupported QoS limit: maximum.wr_iops"
00:17:32.792  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-11-20T10:14:27.740354786+01:00", grpc_status:3, grpc_message:"Unsupported QoS limit: maximum.wr_iops"}"
00:17:32.792  >
00:17:32.792   10:14:27 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:17:32.792   10:14:27 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:32.792   10:14:27 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:32.792   10:14:27 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:32.792   10:14:27 sma.sma_qos -- sma/qos.sh@178 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:32.792    10:14:27 sma.sma_qos -- sma/qos.sh@178 -- # uuid2base64 a0e0b259-d2c3-47c2-9b24-646cff363179
00:17:32.792    10:14:27 sma.sma_qos -- sma/common.sh@20 -- # python
00:17:32.792   10:14:27 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:17:32.792   10:14:27 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:32.792   10:14:27 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:32.792   10:14:27 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:32.792    10:14:27 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:32.792   10:14:27 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:32.792    10:14:27 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:32.792   10:14:27 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:32.792   10:14:27 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:32.792   10:14:27 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:17:32.792   10:14:27 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:33.050  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:33.050  I0000 00:00:1732094068.033551 1834870 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:33.050  I0000 00:00:1732094068.035385 1834870 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:33.050  I0000 00:00:1732094068.037023 1834871 subchannel.cc:806] subchannel 0x5619cc3dd180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5619cc2ea1c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5619cc38e460, grpc.internal.client_channel_call_destination=0x7ffb70639390, grpc.internal.event_engine=0x5619cc350440, grpc.internal.security_connector=0x5619cc3c4d00, grpc.internal.subchannel_pool=0x5619cc3c4c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5619cc00d2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:28.036526694+01:00"}), backing off for 999 ms
00:17:33.050  [2024-11-20 10:14:28.048931] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:cnode0-invalid' does not exist
00:17:33.050  Traceback (most recent call last):
00:17:33.050    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:33.050      main(sys.argv[1:])
00:17:33.050    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:33.050      result = client.call(request['method'], request.get('params', {}))
00:17:33.050               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:33.050    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:33.050      response = func(request=json_format.ParseDict(params, input()))
00:17:33.050                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:33.050    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:33.050      return _end_unary_response_blocking(state, call, False, None)
00:17:33.050             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:33.050    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:33.050      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:33.050      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:33.050  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:33.050  	status = StatusCode.NOT_FOUND
00:17:33.050  	details = "No device associated with device_handle could be found"
00:17:33.050  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-11-20T10:14:28.053273255+01:00", grpc_status:5, grpc_message:"No device associated with device_handle could be found"}"
00:17:33.050  >
00:17:33.050   10:14:28 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:17:33.050   10:14:28 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:33.050   10:14:28 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:33.050   10:14:28 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:33.050   10:14:28 sma.sma_qos -- sma/qos.sh@191 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:33.050     10:14:28 sma.sma_qos -- sma/qos.sh@191 -- # uuidgen
00:17:33.050    10:14:28 sma.sma_qos -- sma/qos.sh@191 -- # uuid2base64 29226ed9-8102-4767-9166-bcbe5c5786b3
00:17:33.050    10:14:28 sma.sma_qos -- sma/common.sh@20 -- # python
00:17:33.050   10:14:28 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:17:33.050   10:14:28 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:33.050   10:14:28 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:33.050   10:14:28 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:33.050    10:14:28 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:33.050   10:14:28 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:33.050    10:14:28 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:33.050   10:14:28 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:33.050   10:14:28 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:33.050   10:14:28 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:17:33.050   10:14:28 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:33.308  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:33.308  I0000 00:00:1732094068.363203 1834968 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:33.308  I0000 00:00:1732094068.364922 1834968 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:33.308  I0000 00:00:1732094068.366669 1835022 subchannel.cc:806] subchannel 0x5607ea5a5180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5607ea4b21c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5607ea556460, grpc.internal.client_channel_call_destination=0x7fb9d59b1390, grpc.internal.event_engine=0x5607ea518440, grpc.internal.security_connector=0x5607ea58cd00, grpc.internal.subchannel_pool=0x5607ea58cc10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5607ea1d52f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:28.366112485+01:00"}), backing off for 1000 ms
00:17:33.308  [2024-11-20 10:14:28.373996] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 29226ed9-8102-4767-9166-bcbe5c5786b3
00:17:33.308  Traceback (most recent call last):
00:17:33.308    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:33.308      main(sys.argv[1:])
00:17:33.308    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:33.308      result = client.call(request['method'], request.get('params', {}))
00:17:33.308               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:33.308    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:33.308      response = func(request=json_format.ParseDict(params, input()))
00:17:33.308                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:33.308    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:33.308      return _end_unary_response_blocking(state, call, False, None)
00:17:33.308             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:33.308    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:33.308      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:33.308      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:33.308  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:33.308  	status = StatusCode.NOT_FOUND
00:17:33.308  	details = "No volume associated with volume_id could be found"
00:17:33.308  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-11-20T10:14:28.378347171+01:00", grpc_status:5, grpc_message:"No volume associated with volume_id could be found"}"
00:17:33.308  >
00:17:33.308   10:14:28 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:17:33.308   10:14:28 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:33.308   10:14:28 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:33.308   10:14:28 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:33.308   10:14:28 sma.sma_qos -- sma/qos.sh@205 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:33.308   10:14:28 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:17:33.308   10:14:28 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:33.308   10:14:28 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:33.308   10:14:28 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:33.308    10:14:28 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:33.308   10:14:28 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:33.308    10:14:28 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:33.308   10:14:28 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:33.308   10:14:28 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:33.308   10:14:28 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:17:33.308   10:14:28 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:33.566  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:33.566  I0000 00:00:1732094068.636326 1835044 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:33.566  I0000 00:00:1732094068.638336 1835044 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:33.566  I0000 00:00:1732094068.639946 1835054 subchannel.cc:806] subchannel 0x55ba51ff0180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55ba51efd1c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55ba51fa1460, grpc.internal.client_channel_call_destination=0x7f6c0eec1390, grpc.internal.event_engine=0x55ba51f63440, grpc.internal.security_connector=0x55ba51e59650, grpc.internal.subchannel_pool=0x55ba51fd7c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55ba51c202f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:28.639441521+01:00"}), backing off for 999 ms
00:17:33.566  Traceback (most recent call last):
00:17:33.566    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:33.566      main(sys.argv[1:])
00:17:33.566    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:33.566      result = client.call(request['method'], request.get('params', {}))
00:17:33.566               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:33.566    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:33.566      response = func(request=json_format.ParseDict(params, input()))
00:17:33.566                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:33.566    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:33.566      return _end_unary_response_blocking(state, call, False, None)
00:17:33.566             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:33.566    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:33.566      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:33.566      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:33.566  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:33.566  	status = StatusCode.INVALID_ARGUMENT
00:17:33.566  	details = "Invalid volume ID"
00:17:33.566  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid volume ID", grpc_status:3, created_time:"2024-11-20T10:14:28.641385932+01:00"}"
00:17:33.566  >
00:17:33.566   10:14:28 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:17:33.566   10:14:28 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:33.566   10:14:28 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:33.566   10:14:28 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:33.566   10:14:28 sma.sma_qos -- sma/qos.sh@217 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:33.566    10:14:28 sma.sma_qos -- sma/qos.sh@217 -- # uuid2base64 a0e0b259-d2c3-47c2-9b24-646cff363179
00:17:33.566    10:14:28 sma.sma_qos -- sma/common.sh@20 -- # python
00:17:33.824   10:14:28 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:17:33.824   10:14:28 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:33.824   10:14:28 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:33.824   10:14:28 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:33.824    10:14:28 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:33.824   10:14:28 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:33.824    10:14:28 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:33.824   10:14:28 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:33.824   10:14:28 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:33.824   10:14:28 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:17:33.824   10:14:28 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:33.824  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:33.824  I0000 00:00:1732094068.942319 1835080 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:34.083  I0000 00:00:1732094068.944258 1835080 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:34.083  I0000 00:00:1732094068.945915 1835082 subchannel.cc:806] subchannel 0x55909a17e180 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55909a08b1c0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55909a12f460, grpc.internal.client_channel_call_destination=0x7f39f3bb0390, grpc.internal.event_engine=0x55909a0f1440, grpc.internal.security_connector=0x55909a165d00, grpc.internal.subchannel_pool=0x55909a165c10, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x559099dae2f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-11-20T10:14:28.945394508+01:00"}), backing off for 1000 ms
00:17:34.083  Traceback (most recent call last):
00:17:34.083    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:34.083      main(sys.argv[1:])
00:17:34.083    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:34.083      result = client.call(request['method'], request.get('params', {}))
00:17:34.083               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:34.083    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:34.083      response = func(request=json_format.ParseDict(params, input()))
00:17:34.083                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:34.083    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:34.083      return _end_unary_response_blocking(state, call, False, None)
00:17:34.083             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:34.083    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:34.083      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:34.083      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:34.083  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:34.083  	status = StatusCode.NOT_FOUND
00:17:34.083  	details = "Invalid device handle"
00:17:34.083  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-11-20T10:14:28.947000703+01:00", grpc_status:5, grpc_message:"Invalid device handle"}"
00:17:34.083  >
00:17:34.083   10:14:28 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:17:34.083   10:14:28 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:34.083   10:14:28 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:34.083   10:14:28 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:34.083   10:14:28 sma.sma_qos -- sma/qos.sh@230 -- # diff /dev/fd/62 /dev/fd/61
00:17:34.083    10:14:28 sma.sma_qos -- sma/qos.sh@230 -- # jq --sort-keys
00:17:34.083    10:14:28 sma.sma_qos -- sma/qos.sh@230 -- # rpc_cmd bdev_get_bdevs -b null0
00:17:34.083    10:14:28 sma.sma_qos -- sma/qos.sh@230 -- # jq --sort-keys '.[].assigned_rate_limits'
00:17:34.083    10:14:28 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:34.083    10:14:28 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:17:34.083    10:14:28 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:34.083   10:14:29 sma.sma_qos -- sma/qos.sh@241 -- # trap - SIGINT SIGTERM EXIT
00:17:34.083   10:14:29 sma.sma_qos -- sma/qos.sh@242 -- # cleanup
00:17:34.083   10:14:29 sma.sma_qos -- sma/qos.sh@19 -- # killprocess 1834073
00:17:34.083   10:14:29 sma.sma_qos -- common/autotest_common.sh@954 -- # '[' -z 1834073 ']'
00:17:34.083   10:14:29 sma.sma_qos -- common/autotest_common.sh@958 -- # kill -0 1834073
00:17:34.083    10:14:29 sma.sma_qos -- common/autotest_common.sh@959 -- # uname
00:17:34.083   10:14:29 sma.sma_qos -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:34.083    10:14:29 sma.sma_qos -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1834073
00:17:34.083   10:14:29 sma.sma_qos -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:34.083   10:14:29 sma.sma_qos -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:34.083   10:14:29 sma.sma_qos -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1834073'
00:17:34.083  killing process with pid 1834073
00:17:34.083   10:14:29 sma.sma_qos -- common/autotest_common.sh@973 -- # kill 1834073
00:17:34.083   10:14:29 sma.sma_qos -- common/autotest_common.sh@978 -- # wait 1834073
00:17:36.034   10:14:31 sma.sma_qos -- sma/qos.sh@20 -- # killprocess 1834074
00:17:36.034   10:14:31 sma.sma_qos -- common/autotest_common.sh@954 -- # '[' -z 1834074 ']'
00:17:36.034   10:14:31 sma.sma_qos -- common/autotest_common.sh@958 -- # kill -0 1834074
00:17:36.034    10:14:31 sma.sma_qos -- common/autotest_common.sh@959 -- # uname
00:17:36.034   10:14:31 sma.sma_qos -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:36.034    10:14:31 sma.sma_qos -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1834074
00:17:36.034   10:14:31 sma.sma_qos -- common/autotest_common.sh@960 -- # process_name=python3
00:17:36.034   10:14:31 sma.sma_qos -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:17:36.034   10:14:31 sma.sma_qos -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1834074'
00:17:36.034  killing process with pid 1834074
00:17:36.034   10:14:31 sma.sma_qos -- common/autotest_common.sh@973 -- # kill 1834074
00:17:36.034   10:14:31 sma.sma_qos -- common/autotest_common.sh@978 -- # wait 1834074
00:17:36.292  
00:17:36.292  real	0m8.292s
00:17:36.292  user	0m11.305s
00:17:36.292  sys	0m1.320s
00:17:36.292   10:14:31 sma.sma_qos -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:36.292   10:14:31 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:17:36.292  ************************************
00:17:36.292  END TEST sma_qos
00:17:36.292  ************************************
00:17:36.292  
00:17:36.292  real	3m43.244s
00:17:36.292  user	6m39.030s
00:17:36.292  sys	0m26.148s
00:17:36.292   10:14:31 sma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:36.292   10:14:31 sma -- common/autotest_common.sh@10 -- # set +x
00:17:36.292  ************************************
00:17:36.292  END TEST sma
00:17:36.292  ************************************
00:17:36.292   10:14:31  -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:17:36.292   10:14:31  -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:17:36.292   10:14:31  -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:17:36.292   10:14:31  -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:17:36.292   10:14:31  -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:36.292   10:14:31  -- common/autotest_common.sh@10 -- # set +x
00:17:36.292   10:14:31  -- spdk/autotest.sh@388 -- # autotest_cleanup
00:17:36.292   10:14:31  -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:17:36.292   10:14:31  -- common/autotest_common.sh@1397 -- # xtrace_disable
00:17:36.292   10:14:31  -- common/autotest_common.sh@10 -- # set +x
00:17:38.192  INFO: APP EXITING
00:17:38.192  INFO: killing all VMs
00:17:38.192  INFO: killing vhost app
00:17:38.192  INFO: EXIT DONE
00:17:39.567  0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:17:39.567  0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:17:39.567  0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:17:39.567  0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:17:39.567  0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:17:39.567  0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:17:39.567  0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:17:39.567  0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:17:39.567  0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:17:39.567  0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:17:39.567  0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:17:39.567  0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:17:39.567  0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:17:39.567  0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:17:39.567  0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:17:39.567  0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:17:39.826  0000:85:00.0 (8086 0a54): Already using the nvme driver
00:17:41.203  Cleaning
00:17:41.203  Removing:    /dev/shm/spdk_tgt_trace.pid1735645
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1733015
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1734025
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1735645
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1736363
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1737203
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1737612
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1738584
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1738729
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1739304
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1739774
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1740255
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1740845
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1741311
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1741473
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1741752
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1741946
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1742509
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1745760
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1746195
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1746628
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1746767
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1747991
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1748135
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1749241
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1749377
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1749806
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1749951
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1750374
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1750513
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1751557
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1751712
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1752040
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1753458
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1761645
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1769864
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1779536
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1789009
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1789533
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1794861
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1802771
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1807214
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1811824
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1815128
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1815129
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1815130
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1827183
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1830300
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1830698
00:17:41.203  Removing:    /var/run/dpdk/spdk_pid1834073
00:17:41.203  Clean
00:17:41.203   10:14:36  -- common/autotest_common.sh@1453 -- # return 0
00:17:41.203   10:14:36  -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:17:41.203   10:14:36  -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:41.203   10:14:36  -- common/autotest_common.sh@10 -- # set +x
00:17:41.203   10:14:36  -- spdk/autotest.sh@391 -- # timing_exit autotest
00:17:41.203   10:14:36  -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:41.203   10:14:36  -- common/autotest_common.sh@10 -- # set +x
00:17:41.203   10:14:36  -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/timing.txt
00:17:41.203   10:14:36  -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/udev.log ]]
00:17:41.203   10:14:36  -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/udev.log
00:17:41.203   10:14:36  -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:17:41.203    10:14:36  -- spdk/autotest.sh@398 -- # hostname
00:17:41.203   10:14:36  -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk -t spdk-gp-13 -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_test.info
00:17:41.462  geninfo: WARNING: invalid characters removed from testname!
00:18:13.551   10:15:06  -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:18:16.081   10:15:11  -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:18:19.362   10:15:14  -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:18:21.891   10:15:16  -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:18:25.175   10:15:19  -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:18:27.714   10:15:22  -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:18:31.903   10:15:26  -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:18:31.903   10:15:26  -- spdk/autorun.sh@1 -- $ timing_finish
00:18:31.903   10:15:26  -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/timing.txt ]]
00:18:31.903   10:15:26  -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:18:31.903   10:15:26  -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:18:31.903   10:15:26  -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/timing.txt
00:18:31.903  + [[ -n 1653281 ]]
00:18:31.903  + sudo kill 1653281
00:18:31.913  [Pipeline] }
00:18:31.926  [Pipeline] // stage
00:18:31.930  [Pipeline] }
00:18:31.942  [Pipeline] // timeout
00:18:31.944  [Pipeline] }
00:18:31.954  [Pipeline] // catchError
00:18:31.957  [Pipeline] }
00:18:31.970  [Pipeline] // wrap
00:18:31.975  [Pipeline] }
00:18:31.987  [Pipeline] // catchError
00:18:31.993  [Pipeline] stage
00:18:31.995  [Pipeline] { (Epilogue)
00:18:32.006  [Pipeline] catchError
00:18:32.007  [Pipeline] {
00:18:32.015  [Pipeline] echo
00:18:32.016  Cleanup processes
00:18:32.020  [Pipeline] sh
00:18:32.303  + sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:18:32.303  1842352 sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:18:32.315  [Pipeline] sh
00:18:32.594  ++ sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:18:32.594  ++ awk '{print $1}'
00:18:32.594  ++ grep -v 'sudo pgrep'
00:18:32.594  + sudo kill -9
00:18:32.594  + true
00:18:32.608  [Pipeline] sh
00:18:32.921  + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:18:41.066  [Pipeline] sh
00:18:41.355  + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:18:41.355  Artifacts sizes are good
00:18:41.369  [Pipeline] archiveArtifacts
00:18:41.378  Archiving artifacts
00:18:41.510  [Pipeline] sh
00:18:41.798  + sudo chown -R sys_sgci: /var/jenkins/workspace/vfio-user-phy-autotest
00:18:41.813  [Pipeline] cleanWs
00:18:41.823  [WS-CLEANUP] Deleting project workspace...
00:18:41.823  [WS-CLEANUP] Deferred wipeout is used...
00:18:41.830  [WS-CLEANUP] done
00:18:41.832  [Pipeline] }
00:18:41.849  [Pipeline] // catchError
00:18:41.862  [Pipeline] sh
00:18:42.145  + logger -p user.info -t JENKINS-CI
00:18:42.153  [Pipeline] }
00:18:42.167  [Pipeline] // stage
00:18:42.175  [Pipeline] }
00:18:42.188  [Pipeline] // node
00:18:42.193  [Pipeline] End of Pipeline
00:18:42.230  Finished: SUCCESS