00:00:00.001  Started by upstream project "autotest-per-patch" build number 132838
00:00:00.001  originally caused by:
00:00:00.001   Started by user sys_sgci
00:00:00.014  Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/vfio-user-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.015  The recommended git tool is: git
00:00:00.015  using credential 00000000-0000-0000-0000-000000000002
00:00:00.022   > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/vfio-user-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.038  Fetching changes from the remote Git repository
00:00:00.040   > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.062  Using shallow fetch with depth 1
00:00:00.062  Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.062   > git --version # timeout=10
00:00:00.084   > git --version # 'git version 2.39.2'
00:00:00.084  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.107  Setting http proxy: proxy-dmz.intel.com:911
00:00:00.107   > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.571   > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.581   > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.590  Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:02.590   > git config core.sparsecheckout # timeout=10
00:00:02.600   > git read-tree -mu HEAD # timeout=10
00:00:02.614   > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:02.629  Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:02.629   > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:02.739  [Pipeline] Start of Pipeline
00:00:02.753  [Pipeline] library
00:00:02.755  Loading library shm_lib@master
00:00:02.755  Library shm_lib@master is cached. Copying from home.
00:00:02.773  [Pipeline] node
00:00:02.790  Running on WFP17 in /var/jenkins/workspace/vfio-user-phy-autotest
00:00:02.792  [Pipeline] {
00:00:02.803  [Pipeline] catchError
00:00:02.804  [Pipeline] {
00:00:02.818  [Pipeline] wrap
00:00:02.827  [Pipeline] {
00:00:02.835  [Pipeline] stage
00:00:02.837  [Pipeline] { (Prologue)
00:00:03.066  [Pipeline] sh
00:00:03.348  + logger -p user.info -t JENKINS-CI
00:00:03.363  [Pipeline] echo
00:00:03.365  Node: WFP17
00:00:03.371  [Pipeline] sh
00:00:03.667  [Pipeline] setCustomBuildProperty
00:00:03.679  [Pipeline] echo
00:00:03.680  Cleanup processes
00:00:03.686  [Pipeline] sh
00:00:03.969  + sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:00:03.969  4133853 sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:00:03.981  [Pipeline] sh
00:00:04.260  ++ sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:00:04.260  ++ grep -v 'sudo pgrep'
00:00:04.260  ++ awk '{print $1}'
00:00:04.260  + sudo kill -9
00:00:04.260  + true
00:00:04.273  [Pipeline] cleanWs
00:00:04.280  [WS-CLEANUP] Deleting project workspace...
00:00:04.280  [WS-CLEANUP] Deferred wipeout is used...
00:00:04.286  [WS-CLEANUP] done
00:00:04.290  [Pipeline] setCustomBuildProperty
00:00:04.303  [Pipeline] sh
00:00:04.580  + sudo git config --global --replace-all safe.directory '*'
00:00:04.661  [Pipeline] httpRequest
00:00:05.081  [Pipeline] echo
00:00:05.082  Sorcerer 10.211.164.101 is alive
00:00:05.087  [Pipeline] retry
00:00:05.089  [Pipeline] {
00:00:05.096  [Pipeline] httpRequest
00:00:05.099  HttpMethod: GET
00:00:05.100  URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.100  Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.106  Response Code: HTTP/1.1 200 OK
00:00:05.107  Success: Status code 200 is in the accepted range: 200,404
00:00:05.107  Saving response body to /var/jenkins/workspace/vfio-user-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.699  [Pipeline] }
00:00:05.715  [Pipeline] // retry
00:00:05.720  [Pipeline] sh
00:00:05.997  + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.011  [Pipeline] httpRequest
00:00:09.048  [Pipeline] echo
00:00:09.049  Sorcerer 10.211.164.101 is dead
00:00:09.057  [Pipeline] httpRequest
00:00:09.589  [Pipeline] echo
00:00:09.590  Sorcerer 10.211.164.101 is alive
00:00:09.597  [Pipeline] retry
00:00:09.598  [Pipeline] {
00:00:09.611  [Pipeline] httpRequest
00:00:09.615  HttpMethod: GET
00:00:09.615  URL: http://10.211.164.101/packages/spdk_6263899172182e027030cd18a9502d00497c00eb.tar.gz
00:00:09.616  Sending request to url: http://10.211.164.101/packages/spdk_6263899172182e027030cd18a9502d00497c00eb.tar.gz
00:00:09.623  Response Code: HTTP/1.1 200 OK
00:00:09.623  Success: Status code 200 is in the accepted range: 200,404
00:00:09.623  Saving response body to /var/jenkins/workspace/vfio-user-phy-autotest/spdk_6263899172182e027030cd18a9502d00497c00eb.tar.gz
00:00:37.142  [Pipeline] }
00:00:37.159  [Pipeline] // retry
00:00:37.166  [Pipeline] sh
00:00:37.451  + tar --no-same-owner -xf spdk_6263899172182e027030cd18a9502d00497c00eb.tar.gz
00:00:41.656  [Pipeline] sh
00:00:41.940  + git -C spdk log --oneline -n5
00:00:41.940  626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:00:41.940  cec5ba284 nvme/rdma: Register UMR per IO request
00:00:41.940  7219bd1a7 thread: use extended version of fd group add
00:00:41.940  1a5bdab32 event: use extended version of fd group add
00:00:41.940  92d1e663a bdev/nvme: Fix depopulating a namespace twice
00:00:41.950  [Pipeline] }
00:00:41.963  [Pipeline] // stage
00:00:41.971  [Pipeline] stage
00:00:41.973  [Pipeline] { (Prepare)
00:00:41.993  [Pipeline] writeFile
00:00:42.008  [Pipeline] sh
00:00:42.291  + logger -p user.info -t JENKINS-CI
00:00:42.303  [Pipeline] sh
00:00:42.586  + logger -p user.info -t JENKINS-CI
00:00:42.598  [Pipeline] sh
00:00:42.904  + cat autorun-spdk.conf
00:00:42.904  SPDK_RUN_FUNCTIONAL_TEST=1
00:00:42.904  SPDK_TEST_VFIOUSER_QEMU=1
00:00:42.904  SPDK_RUN_ASAN=1
00:00:42.904  SPDK_RUN_UBSAN=1
00:00:42.904  SPDK_TEST_SMA=1
00:00:42.930  RUN_NIGHTLY=0
00:00:42.934  [Pipeline] readFile
00:00:42.954  [Pipeline] copyArtifacts
00:00:45.726  Copied 1 artifact from "qemu-vfio" build number 34
00:00:45.731  [Pipeline] sh
00:00:46.017  + tar xf qemu-vfio.tar.gz
00:00:47.945  [Pipeline] copyArtifacts
00:00:47.969  Copied 1 artifact from "vagrant-build-vhost" build number 6
00:00:47.973  [Pipeline] sh
00:00:48.262  + sudo mkdir -p /var/spdk/dependencies/vhost
00:00:48.274  [Pipeline] sh
00:00:48.556  + cd /var/spdk/dependencies/vhost
00:00:48.556  + md5sum --quiet -c /var/jenkins/workspace/vfio-user-phy-autotest/spdk_test_image.qcow2.gz.md5
00:00:51.863  [Pipeline] withEnv
00:00:51.866  [Pipeline] {
00:00:51.884  [Pipeline] sh
00:00:52.172  + set -ex
00:00:52.172  + [[ -f /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf ]]
00:00:52.172  + source /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf
00:00:52.172  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:52.172  ++ SPDK_TEST_VFIOUSER_QEMU=1
00:00:52.172  ++ SPDK_RUN_ASAN=1
00:00:52.172  ++ SPDK_RUN_UBSAN=1
00:00:52.172  ++ SPDK_TEST_SMA=1
00:00:52.172  ++ RUN_NIGHTLY=0
00:00:52.172  + case $SPDK_TEST_NVMF_NICS in
00:00:52.172  + DRIVERS=
00:00:52.172  + [[ -n '' ]]
00:00:52.172  + exit 0
00:00:52.182  [Pipeline] }
00:00:52.197  [Pipeline] // withEnv
00:00:52.202  [Pipeline] }
00:00:52.216  [Pipeline] // stage
00:00:52.225  [Pipeline] catchError
00:00:52.227  [Pipeline] {
00:00:52.242  [Pipeline] timeout
00:00:52.242  Timeout set to expire in 35 min
00:00:52.244  [Pipeline] {
00:00:52.258  [Pipeline] stage
00:00:52.260  [Pipeline] { (Tests)
00:00:52.274  [Pipeline] sh
00:00:52.561  + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/vfio-user-phy-autotest
00:00:52.561  ++ readlink -f /var/jenkins/workspace/vfio-user-phy-autotest
00:00:52.561  + DIR_ROOT=/var/jenkins/workspace/vfio-user-phy-autotest
00:00:52.561  + [[ -n /var/jenkins/workspace/vfio-user-phy-autotest ]]
00:00:52.561  + DIR_SPDK=/var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:00:52.561  + DIR_OUTPUT=/var/jenkins/workspace/vfio-user-phy-autotest/output
00:00:52.561  + [[ -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk ]]
00:00:52.561  + [[ ! -d /var/jenkins/workspace/vfio-user-phy-autotest/output ]]
00:00:52.561  + mkdir -p /var/jenkins/workspace/vfio-user-phy-autotest/output
00:00:52.561  + [[ -d /var/jenkins/workspace/vfio-user-phy-autotest/output ]]
00:00:52.561  + [[ vfio-user-phy-autotest == pkgdep-* ]]
00:00:52.561  + cd /var/jenkins/workspace/vfio-user-phy-autotest
00:00:52.561  + source /etc/os-release
00:00:52.561  ++ NAME='Fedora Linux'
00:00:52.561  ++ VERSION='39 (Cloud Edition)'
00:00:52.561  ++ ID=fedora
00:00:52.561  ++ VERSION_ID=39
00:00:52.561  ++ VERSION_CODENAME=
00:00:52.561  ++ PLATFORM_ID=platform:f39
00:00:52.561  ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:00:52.561  ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:52.561  ++ LOGO=fedora-logo-icon
00:00:52.561  ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:00:52.561  ++ HOME_URL=https://fedoraproject.org/
00:00:52.562  ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:00:52.562  ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:52.562  ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:52.562  ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:52.562  ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:00:52.562  ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:52.562  ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:00:52.562  ++ SUPPORT_END=2024-11-12
00:00:52.562  ++ VARIANT='Cloud Edition'
00:00:52.562  ++ VARIANT_ID=cloud
00:00:52.562  + uname -a
00:00:52.562  Linux spdk-wfp-17 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:00:52.562  + sudo /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh status
00:00:53.507  Hugepages
00:00:53.507  node     hugesize     free /  total
00:00:53.507  node0   1048576kB        0 /      0
00:00:53.507  node0      2048kB        0 /      0
00:00:53.507  node1   1048576kB        0 /      0
00:00:53.507  node1      2048kB        0 /      0
00:00:53.507  
00:00:53.507  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:00:53.507  I/OAT                     0000:00:04.0    8086   6f20   0       ioatdma          -          -
00:00:53.507  I/OAT                     0000:00:04.1    8086   6f21   0       ioatdma          -          -
00:00:53.507  I/OAT                     0000:00:04.2    8086   6f22   0       ioatdma          -          -
00:00:53.507  I/OAT                     0000:00:04.3    8086   6f23   0       ioatdma          -          -
00:00:53.507  I/OAT                     0000:00:04.4    8086   6f24   0       ioatdma          -          -
00:00:53.507  I/OAT                     0000:00:04.5    8086   6f25   0       ioatdma          -          -
00:00:53.507  I/OAT                     0000:00:04.6    8086   6f26   0       ioatdma          -          -
00:00:53.507  I/OAT                     0000:00:04.7    8086   6f27   0       ioatdma          -          -
00:00:53.507  NVMe                      0000:0d:00.0    8086   0a54   0       nvme             nvme0      nvme0n1
00:00:53.507  I/OAT                     0000:80:04.0    8086   6f20   1       ioatdma          -          -
00:00:53.507  I/OAT                     0000:80:04.1    8086   6f21   1       ioatdma          -          -
00:00:53.507  I/OAT                     0000:80:04.2    8086   6f22   1       ioatdma          -          -
00:00:53.507  I/OAT                     0000:80:04.3    8086   6f23   1       ioatdma          -          -
00:00:53.507  I/OAT                     0000:80:04.4    8086   6f24   1       ioatdma          -          -
00:00:53.507  I/OAT                     0000:80:04.5    8086   6f25   1       ioatdma          -          -
00:00:53.507  I/OAT                     0000:80:04.6    8086   6f26   1       ioatdma          -          -
00:00:53.507  I/OAT                     0000:80:04.7    8086   6f27   1       ioatdma          -          -
00:00:53.507  + rm -f /tmp/spdk-ld-path
00:00:53.507  + source autorun-spdk.conf
00:00:53.507  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:53.507  ++ SPDK_TEST_VFIOUSER_QEMU=1
00:00:53.507  ++ SPDK_RUN_ASAN=1
00:00:53.507  ++ SPDK_RUN_UBSAN=1
00:00:53.507  ++ SPDK_TEST_SMA=1
00:00:53.507  ++ RUN_NIGHTLY=0
00:00:53.507  + ((  SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1  ))
00:00:53.507  + [[ -n '' ]]
00:00:53.507  + sudo git config --global --add safe.directory /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:00:53.507  + for M in /var/spdk/build-*-manifest.txt
00:00:53.507  + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:00:53.507  + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/vfio-user-phy-autotest/output/
00:00:53.507  + for M in /var/spdk/build-*-manifest.txt
00:00:53.507  + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:53.507  + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/vfio-user-phy-autotest/output/
00:00:53.507  + for M in /var/spdk/build-*-manifest.txt
00:00:53.508  + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:53.508  + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/vfio-user-phy-autotest/output/
00:00:53.508  ++ uname
00:00:53.508  + [[ Linux == \L\i\n\u\x ]]
00:00:53.508  + sudo dmesg -T
00:00:53.508  + sudo dmesg --clear
00:00:53.508  + dmesg_pid=4134983
00:00:53.508  + [[ Fedora Linux == FreeBSD ]]
00:00:53.508  + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:53.508  + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:53.508  + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:53.508  + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:53.508  + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:53.508  + [[ -x /usr/src/fio-static/fio ]]
00:00:53.508  + export FIO_BIN=/usr/src/fio-static/fio
00:00:53.508  + FIO_BIN=/usr/src/fio-static/fio
00:00:53.508  + [[ /var/jenkins/workspace/vfio-user-phy-autotest/qemu_vfio/bin/qemu-system-x86_64 == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\v\f\i\o\-\u\s\e\r\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:53.508  + sudo dmesg -Tw
00:00:53.508  ++ ldd /var/jenkins/workspace/vfio-user-phy-autotest/qemu_vfio/bin/qemu-system-x86_64
00:00:53.508  + deps='	linux-vdso.so.1 (0x00007fff943ab000)
00:00:53.508  	libpixman-1.so.0 => /usr/lib64/libpixman-1.so.0 (0x00007f90b1dac000)
00:00:53.508  	libz.so.1 => /usr/lib64/libz.so.1 (0x00007f90b1d92000)
00:00:53.508  	libudev.so.1 => /usr/lib64/libudev.so.1 (0x00007f90b1d5b000)
00:00:53.508  	libpmem.so.1 => /usr/lib64/libpmem.so.1 (0x00007f90b1d02000)
00:00:53.508  	libdaxctl.so.1 => /usr/lib64/libdaxctl.so.1 (0x00007f90b1cf5000)
00:00:53.508  	libnuma.so.1 => /usr/lib64/libnuma.so.1 (0x00007f90b1ce6000)
00:00:53.508  	libgio-2.0.so.0 => /usr/lib64/libgio-2.0.so.0 (0x00007f90b1b0c000)
00:00:53.508  	libgobject-2.0.so.0 => /usr/lib64/libgobject-2.0.so.0 (0x00007f90b1aac000)
00:00:53.508  	libglib-2.0.so.0 => /usr/lib64/libglib-2.0.so.0 (0x00007f90b1962000)
00:00:53.508  	librdmacm.so.1 => /usr/lib64/librdmacm.so.1 (0x00007f90b1946000)
00:00:53.508  	libibverbs.so.1 => /usr/lib64/libibverbs.so.1 (0x00007f90b1924000)
00:00:53.508  	libslirp.so.0 => /usr/lib64/libslirp.so.0 (0x00007f90b1902000)
00:00:53.508  	libbpf.so.0 => not found
00:00:53.508  	libncursesw.so.6 => /usr/lib64/libncursesw.so.6 (0x00007f90b18c1000)
00:00:53.508  	libtinfo.so.6 => /usr/lib64/libtinfo.so.6 (0x00007f90b188c000)
00:00:53.508  	libgmodule-2.0.so.0 => /usr/lib64/libgmodule-2.0.so.0 (0x00007f90b1885000)
00:00:53.508  	liburing.so.2 => /usr/lib64/liburing.so.2 (0x00007f90b187d000)
00:00:53.508  	libfuse3.so.3 => /usr/lib64/libfuse3.so.3 (0x00007f90b183b000)
00:00:53.508  	libiscsi.so.9 => /usr/lib64/iscsi/libiscsi.so.9 (0x00007f90b180b000)
00:00:53.508  	libaio.so.1 => /usr/lib64/libaio.so.1 (0x00007f90b1806000)
00:00:53.508  	librbd.so.1 => /usr/lib64/librbd.so.1 (0x00007f90b0f4b000)
00:00:53.508  	librados.so.2 => /usr/lib64/librados.so.2 (0x00007f90b0d83000)
00:00:53.508  	libm.so.6 => /usr/lib64/libm.so.6 (0x00007f90b0ca2000)
00:00:53.508  	libgcc_s.so.1 => /usr/lib64/libgcc_s.so.1 (0x00007f90b0c7d000)
00:00:53.508  	libc.so.6 => /usr/lib64/libc.so.6 (0x00007f90b0a99000)
00:00:53.508  	/lib64/ld-linux-x86-64.so.2 (0x00007f90b2f10000)
00:00:53.508  	libcap.so.2 => /usr/lib64/libcap.so.2 (0x00007f90b0a8f000)
00:00:53.508  	libndctl.so.6 => /usr/lib64/libndctl.so.6 (0x00007f90b0a62000)
00:00:53.508  	libuuid.so.1 => /usr/lib64/libuuid.so.1 (0x00007f90b0a58000)
00:00:53.508  	libkmod.so.2 => /usr/lib64/libkmod.so.2 (0x00007f90b0a3c000)
00:00:53.508  	libmount.so.1 => /usr/lib64/libmount.so.1 (0x00007f90b09e9000)
00:00:53.508  	libselinux.so.1 => /usr/lib64/libselinux.so.1 (0x00007f90b09bc000)
00:00:53.508  	libffi.so.8 => /usr/lib64/libffi.so.8 (0x00007f90b09ac000)
00:00:53.508  	libpcre2-8.so.0 => /usr/lib64/libpcre2-8.so.0 (0x00007f90b0911000)
00:00:53.508  	libnl-3.so.200 => /usr/lib64/libnl-3.so.200 (0x00007f90b08ec000)
00:00:53.508  	libnl-route-3.so.200 => /usr/lib64/libnl-route-3.so.200 (0x00007f90b0854000)
00:00:53.508  	libgcrypt.so.20 => /usr/lib64/libgcrypt.so.20 (0x00007f90b071a000)
00:00:53.508  	libssl.so.3 => /usr/lib64/libssl.so.3 (0x00007f90b0677000)
00:00:53.508  	libcryptsetup.so.12 => /usr/lib64/libcryptsetup.so.12 (0x00007f90b05f6000)
00:00:53.508  	libceph-common.so.2 => /usr/lib64/ceph/libceph-common.so.2 (0x00007f90af9c6000)
00:00:53.508  	libcrypto.so.3 => /usr/lib64/libcrypto.so.3 (0x00007f90af4ed000)
00:00:53.508  	libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00007f90af297000)
00:00:53.508  	libzstd.so.1 => /usr/lib64/libzstd.so.1 (0x00007f90af1d8000)
00:00:53.508  	liblzma.so.5 => /usr/lib64/liblzma.so.5 (0x00007f90af1a5000)
00:00:53.508  	libblkid.so.1 => /usr/lib64/libblkid.so.1 (0x00007f90af169000)
00:00:53.508  	libgpg-error.so.0 => /usr/lib64/libgpg-error.so.0 (0x00007f90af143000)
00:00:53.508  	libdevmapper.so.1.02 => /usr/lib64/libdevmapper.so.1.02 (0x00007f90af0e4000)
00:00:53.508  	libargon2.so.1 => /usr/lib64/libargon2.so.1 (0x00007f90af0dc000)
00:00:53.508  	libjson-c.so.5 => /usr/lib64/libjson-c.so.5 (0x00007f90af0c8000)
00:00:53.508  	libresolv.so.2 => /usr/lib64/libresolv.so.2 (0x00007f90af0b7000)
00:00:53.508  	libcurl.so.4 => /usr/lib64/libcurl.so.4 (0x00007f90af003000)
00:00:53.508  	libthrift-0.15.0.so => /usr/lib64/libthrift-0.15.0.so (0x00007f90aef69000)
00:00:53.508  	libnghttp2.so.14 => /usr/lib64/libnghttp2.so.14 (0x00007f90aef3c000)
00:00:53.508  	libidn2.so.0 => /usr/lib64/libidn2.so.0 (0x00007f90aef1a000)
00:00:53.508  	libssh.so.4 => /usr/lib64/libssh.so.4 (0x00007f90aeea7000)
00:00:53.508  	libpsl.so.5 => /usr/lib64/libpsl.so.5 (0x00007f90aee93000)
00:00:53.508  	libgssapi_krb5.so.2 => /usr/lib64/libgssapi_krb5.so.2 (0x00007f90aee3d000)
00:00:53.508  	libldap.so.2 => /usr/lib64/libldap.so.2 (0x00007f90aedd6000)
00:00:53.508  	liblber.so.2 => /usr/lib64/liblber.so.2 (0x00007f90aedc4000)
00:00:53.508  	libbrotlidec.so.1 => /usr/lib64/libbrotlidec.so.1 (0x00007f90aedb6000)
00:00:53.508  	libunistring.so.5 => /usr/lib64/libunistring.so.5 (0x00007f90aec06000)
00:00:53.508  	libkrb5.so.3 => /usr/lib64/libkrb5.so.3 (0x00007f90aeb2d000)
00:00:53.508  	libk5crypto.so.3 => /usr/lib64/libk5crypto.so.3 (0x00007f90aeb13000)
00:00:53.508  	libcom_err.so.2 => /usr/lib64/libcom_err.so.2 (0x00007f90aeb0c000)
00:00:53.508  	libkrb5support.so.0 => /usr/lib64/libkrb5support.so.0 (0x00007f90aeafc000)
00:00:53.508  	libkeyutils.so.1 => /usr/lib64/libkeyutils.so.1 (0x00007f90aeaf5000)
00:00:53.508  	libevent-2.1.so.7 => /usr/lib64/libevent-2.1.so.7 (0x00007f90aea9d000)
00:00:53.508  	libsasl2.so.3 => /usr/lib64/libsasl2.so.3 (0x00007f90aea7e000)
00:00:53.508  	libbrotlicommon.so.1 => /usr/lib64/libbrotlicommon.so.1 (0x00007f90aea59000)
00:00:53.508  	libcrypt.so.2 => /usr/lib64/libcrypt.so.2 (0x00007f90aea20000)'
00:00:53.508  + [[ 	linux-vdso.so.1 (0x00007fff943ab000)
00:00:53.509  	libcrypt.so.2 => /usr/lib64/libcrypt.so.2 (0x00007f90aea20000) == *\n\o\t\ \f\o\u\n\d* ]]
00:00:53.509  + unset -v VFIO_QEMU_BIN
00:00:53.509  + [[ ! -v VFIO_QEMU_BIN ]]
00:00:53.509  + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:53.509  + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:53.509  + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:53.509  + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:53.509  + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:53.509  + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:53.509  + spdk/autorun.sh /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf
00:00:53.509    22:27:54  -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:00:53.509   22:27:54  -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf
00:00:53.509    22:27:54  -- vfio-user-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:53.509    22:27:54  -- vfio-user-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_VFIOUSER_QEMU=1
00:00:53.509    22:27:54  -- vfio-user-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_RUN_ASAN=1
00:00:53.509    22:27:54  -- vfio-user-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_RUN_UBSAN=1
00:00:53.509    22:27:54  -- vfio-user-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_SMA=1
00:00:53.509    22:27:54  -- vfio-user-phy-autotest/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:00:53.509   22:27:54  -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:00:53.509   22:27:54  -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf
00:00:53.768     22:27:54  -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:00:53.768    22:27:54  -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/common.sh
00:00:53.768     22:27:54  -- scripts/common.sh@15 -- $ shopt -s extglob
00:00:53.768     22:27:54  -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:53.768     22:27:54  -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:53.768     22:27:54  -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:53.768      22:27:54  -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:53.768      22:27:54  -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:53.768      22:27:54  -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:53.768      22:27:54  -- paths/export.sh@5 -- $ export PATH
00:00:53.769      22:27:54  -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:53.769    22:27:54  -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output
00:00:53.769      22:27:54  -- common/autobuild_common.sh@493 -- $ date +%s
00:00:53.769     22:27:54  -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733866074.XXXXXX
00:00:53.769    22:27:54  -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733866074.8JImeA
00:00:53.769    22:27:54  -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:00:53.769    22:27:54  -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:00:53.769    22:27:54  -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/'
00:00:53.769    22:27:54  -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/vfio-user-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:53.769    22:27:54  -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/vfio-user-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:53.769     22:27:54  -- common/autobuild_common.sh@509 -- $ get_config_params
00:00:53.769     22:27:54  -- common/autotest_common.sh@409 -- $ xtrace_disable
00:00:53.769     22:27:54  -- common/autotest_common.sh@10 -- $ set +x
00:00:53.769    22:27:54  -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-sma --with-crypto'
00:00:53.769    22:27:54  -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:00:53.769    22:27:54  -- pm/common@17 -- $ local monitor
00:00:53.769    22:27:54  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:53.769    22:27:54  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:53.769    22:27:54  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:53.769     22:27:54  -- pm/common@21 -- $ date +%s
00:00:53.769    22:27:54  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:53.769    22:27:54  -- pm/common@25 -- $ sleep 1
00:00:53.769     22:27:54  -- pm/common@21 -- $ date +%s
00:00:53.769     22:27:54  -- pm/common@21 -- $ date +%s
00:00:53.769     22:27:54  -- pm/common@21 -- $ date +%s
00:00:53.769    22:27:54  -- pm/common@21 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733866074
00:00:53.769    22:27:54  -- pm/common@21 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733866074
00:00:53.769    22:27:54  -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733866074
00:00:53.769    22:27:54  -- pm/common@21 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733866074
00:00:53.769  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733866074_collect-cpu-load.pm.log
00:00:53.769  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733866074_collect-vmstat.pm.log
00:00:53.769  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733866074_collect-cpu-temp.pm.log
00:00:53.769  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733866074_collect-bmc-pm.bmc.pm.log
00:00:54.706    22:27:55  -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:00:54.706   22:27:55  -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:54.706   22:27:55  -- spdk/autobuild.sh@12 -- $ umask 022
00:00:54.706   22:27:55  -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:00:54.706   22:27:55  -- spdk/autobuild.sh@16 -- $ date -u
00:00:54.706  Tue Dec 10 09:27:55 PM UTC 2024
00:00:54.706   22:27:55  -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:54.706  v25.01-pre-329-g626389917
00:00:54.706   22:27:55  -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:00:54.706   22:27:55  -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:00:54.706   22:27:55  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:00:54.706   22:27:55  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:00:54.706   22:27:55  -- common/autotest_common.sh@10 -- $ set +x
00:00:54.706  ************************************
00:00:54.706  START TEST asan
00:00:54.706  ************************************
00:00:54.706   22:27:55 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:00:54.706  using asan
00:00:54.706  
00:00:54.706  real	0m0.000s
00:00:54.706  user	0m0.000s
00:00:54.706  sys	0m0.000s
00:00:54.706   22:27:55 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:00:54.706   22:27:55 asan -- common/autotest_common.sh@10 -- $ set +x
00:00:54.706  ************************************
00:00:54.706  END TEST asan
00:00:54.706  ************************************
00:00:54.706   22:27:55  -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:54.706   22:27:55  -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:54.706   22:27:55  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:00:54.706   22:27:55  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:00:54.706   22:27:55  -- common/autotest_common.sh@10 -- $ set +x
00:00:54.706  ************************************
00:00:54.706  START TEST ubsan
00:00:54.706  ************************************
00:00:54.706   22:27:55 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:00:54.706  using ubsan
00:00:54.706  
00:00:54.706  real	0m0.000s
00:00:54.706  user	0m0.000s
00:00:54.706  sys	0m0.000s
00:00:54.706   22:27:55 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:00:54.706   22:27:55 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:54.706  ************************************
00:00:54.706  END TEST ubsan
00:00:54.706  ************************************
00:00:54.706   22:27:55  -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:54.706   22:27:55  -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:54.706   22:27:55  -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:54.706   22:27:55  -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:54.706   22:27:55  -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:54.706   22:27:55  -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:54.706   22:27:55  -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:54.706   22:27:55  -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:54.706   22:27:55  -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-sma --with-crypto --with-shared
00:00:54.964  Using default SPDK env in /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk
00:00:54.964  Using default DPDK in /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build
00:00:54.964  Using 'verbs' RDMA provider
00:01:04.021  Configuring ISA-L (logfile: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:12.140  Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:12.140  Creating mk/config.mk...done.
00:01:12.140  Creating mk/cc.flags.mk...done.
00:01:12.140  Type 'make' to build.
00:01:12.140   22:28:12  -- spdk/autobuild.sh@70 -- $ run_test make make -j88
00:01:12.140   22:28:12  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:12.140   22:28:12  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:12.140   22:28:12  -- common/autotest_common.sh@10 -- $ set +x
00:01:12.140  ************************************
00:01:12.140  START TEST make
00:01:12.140  ************************************
00:01:12.140   22:28:12 make -- common/autotest_common.sh@1129 -- $ make -j88
00:01:12.140  make[1]: Nothing to be done for 'all'.
00:01:13.528  The Meson build system
00:01:13.528  Version: 1.5.0
00:01:13.528  Source dir: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/libvfio-user
00:01:13.528  Build dir: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:13.528  Build type: native build
00:01:13.528  Project name: libvfio-user
00:01:13.528  Project version: 0.0.1
00:01:13.528  C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:13.528  C linker for the host machine: cc ld.bfd 2.40-14
00:01:13.528  Host machine cpu family: x86_64
00:01:13.528  Host machine cpu: x86_64
00:01:13.528  Run-time dependency threads found: YES
00:01:13.528  Library dl found: YES
00:01:13.528  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:13.528  Run-time dependency json-c found: YES 0.17
00:01:13.528  Run-time dependency cmocka found: YES 1.1.7
00:01:13.528  Program pytest-3 found: NO
00:01:13.528  Program flake8 found: NO
00:01:13.528  Program misspell-fixer found: NO
00:01:13.528  Program restructuredtext-lint found: NO
00:01:13.528  Program valgrind found: YES (/usr/bin/valgrind)
00:01:13.528  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:01:13.528  Compiler for C supports arguments -Wmissing-declarations: YES 
00:01:13.528  Compiler for C supports arguments -Wwrite-strings: YES 
00:01:13.528  ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:13.528  Program test-lspci.sh found: YES (/var/jenkins/workspace/vfio-user-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:13.528  Program test-linkage.sh found: YES (/var/jenkins/workspace/vfio-user-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:13.528  ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:13.528  Build targets in project: 8
00:01:13.528  WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:13.528   * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:13.528  
00:01:13.528  libvfio-user 0.0.1
00:01:13.528  
00:01:13.528    User defined options
00:01:13.528      buildtype      : debug
00:01:13.528      default_library: shared
00:01:13.528      libdir         : /usr/local/lib
00:01:13.528  
00:01:13.528  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:14.103  ninja: Entering directory `/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:14.103  [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:14.103  [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:14.103  [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:14.103  [4/37] Compiling C object samples/null.p/null.c.o
00:01:14.103  [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:14.103  [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:14.103  [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:14.103  [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:14.103  [9/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:14.367  [10/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:14.367  [11/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:14.367  [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:14.367  [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:14.367  [14/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:14.367  [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:14.367  [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:14.367  [17/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:14.367  [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:14.367  [19/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:14.367  [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:14.367  [21/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:14.367  [22/37] Compiling C object samples/server.p/server.c.o
00:01:14.367  [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:14.367  [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:14.367  [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:14.367  [26/37] Compiling C object samples/client.p/client.c.o
00:01:14.367  [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:14.367  [28/37] Linking target samples/client
00:01:14.367  [29/37] Linking target lib/libvfio-user.so.0.0.1
00:01:14.631  [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:14.631  [31/37] Linking target test/unit_tests
00:01:14.631  [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:14.631  [33/37] Linking target samples/shadow_ioeventfd_server
00:01:14.631  [34/37] Linking target samples/lspci
00:01:14.631  [35/37] Linking target samples/gpio-pci-idio-16
00:01:14.898  [36/37] Linking target samples/server
00:01:14.898  [37/37] Linking target samples/null
00:01:14.898  INFO: autodetecting backend as ninja
00:01:14.898  INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:14.898  DESTDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:15.843  ninja: Entering directory `/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:15.843  ninja: no work to do.
00:01:47.925  The Meson build system
00:01:47.925  Version: 1.5.0
00:01:47.925  Source dir: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk
00:01:47.925  Build dir: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build-tmp
00:01:47.925  Build type: native build
00:01:47.925  Program cat found: YES (/usr/bin/cat)
00:01:47.925  Project name: DPDK
00:01:47.925  Project version: 24.03.0
00:01:47.925  C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:47.925  C linker for the host machine: cc ld.bfd 2.40-14
00:01:47.925  Host machine cpu family: x86_64
00:01:47.925  Host machine cpu: x86_64
00:01:47.925  Message: ## Building in Developer Mode ##
00:01:47.925  Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:47.925  Program check-symbols.sh found: YES (/var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:47.925  Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:47.925  Program python3 found: YES (/usr/bin/python3)
00:01:47.925  Program cat found: YES (/usr/bin/cat)
00:01:47.925  Compiler for C supports arguments -march=native: YES 
00:01:47.925  Checking for size of "void *" : 8 
00:01:47.925  Checking for size of "void *" : 8 (cached)
00:01:47.925  Compiler for C supports link arguments -Wl,--undefined-version: YES 
00:01:47.925  Library m found: YES
00:01:47.925  Library numa found: YES
00:01:47.925  Has header "numaif.h" : YES 
00:01:47.925  Library fdt found: NO
00:01:47.925  Library execinfo found: NO
00:01:47.925  Has header "execinfo.h" : YES 
00:01:47.925  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:47.925  Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:47.925  Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:47.925  Run-time dependency jansson found: NO (tried pkgconfig)
00:01:47.925  Run-time dependency openssl found: YES 3.1.1
00:01:47.925  Run-time dependency libpcap found: YES 1.10.4
00:01:47.925  Has header "pcap.h" with dependency libpcap: YES 
00:01:47.925  Compiler for C supports arguments -Wcast-qual: YES 
00:01:47.925  Compiler for C supports arguments -Wdeprecated: YES 
00:01:47.925  Compiler for C supports arguments -Wformat: YES 
00:01:47.925  Compiler for C supports arguments -Wformat-nonliteral: NO 
00:01:47.925  Compiler for C supports arguments -Wformat-security: NO 
00:01:47.925  Compiler for C supports arguments -Wmissing-declarations: YES 
00:01:47.925  Compiler for C supports arguments -Wmissing-prototypes: YES 
00:01:47.925  Compiler for C supports arguments -Wnested-externs: YES 
00:01:47.925  Compiler for C supports arguments -Wold-style-definition: YES 
00:01:47.925  Compiler for C supports arguments -Wpointer-arith: YES 
00:01:47.925  Compiler for C supports arguments -Wsign-compare: YES 
00:01:47.925  Compiler for C supports arguments -Wstrict-prototypes: YES 
00:01:47.925  Compiler for C supports arguments -Wundef: YES 
00:01:47.925  Compiler for C supports arguments -Wwrite-strings: YES 
00:01:47.925  Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:01:47.925  Compiler for C supports arguments -Wno-packed-not-aligned: YES 
00:01:47.925  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:01:47.925  Compiler for C supports arguments -Wno-zero-length-bounds: YES 
00:01:47.925  Program objdump found: YES (/usr/bin/objdump)
00:01:47.925  Compiler for C supports arguments -mavx512f: YES 
00:01:47.925  Checking if "AVX512 checking" compiles: YES 
00:01:47.925  Fetching value of define "__SSE4_2__" : 1 
00:01:47.925  Fetching value of define "__AES__" : 1 
00:01:47.925  Fetching value of define "__AVX__" : 1 
00:01:47.925  Fetching value of define "__AVX2__" : 1 
00:01:47.925  Fetching value of define "__AVX512BW__" : (undefined) 
00:01:47.925  Fetching value of define "__AVX512CD__" : (undefined) 
00:01:47.925  Fetching value of define "__AVX512DQ__" : (undefined) 
00:01:47.925  Fetching value of define "__AVX512F__" : (undefined) 
00:01:47.925  Fetching value of define "__AVX512VL__" : (undefined) 
00:01:47.925  Fetching value of define "__PCLMUL__" : 1 
00:01:47.925  Fetching value of define "__RDRND__" : 1 
00:01:47.925  Fetching value of define "__RDSEED__" : 1 
00:01:47.925  Fetching value of define "__VPCLMULQDQ__" : (undefined) 
00:01:47.925  Fetching value of define "__znver1__" : (undefined) 
00:01:47.925  Fetching value of define "__znver2__" : (undefined) 
00:01:47.925  Fetching value of define "__znver3__" : (undefined) 
00:01:47.925  Fetching value of define "__znver4__" : (undefined) 
00:01:47.925  Library asan found: YES
00:01:47.925  Compiler for C supports arguments -Wno-format-truncation: YES 
00:01:47.925  Message: lib/log: Defining dependency "log"
00:01:47.925  Message: lib/kvargs: Defining dependency "kvargs"
00:01:47.925  Message: lib/telemetry: Defining dependency "telemetry"
00:01:47.925  Library rt found: YES
00:01:47.925  Checking for function "getentropy" : NO 
00:01:47.925  Message: lib/eal: Defining dependency "eal"
00:01:47.925  Message: lib/ring: Defining dependency "ring"
00:01:47.925  Message: lib/rcu: Defining dependency "rcu"
00:01:47.925  Message: lib/mempool: Defining dependency "mempool"
00:01:47.925  Message: lib/mbuf: Defining dependency "mbuf"
00:01:47.925  Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:47.925  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:47.925  Compiler for C supports arguments -mpclmul: YES 
00:01:47.925  Compiler for C supports arguments -maes: YES 
00:01:47.925  Compiler for C supports arguments -mavx512f: YES (cached)
00:01:47.925  Compiler for C supports arguments -mavx512bw: YES 
00:01:47.925  Compiler for C supports arguments -mavx512dq: YES 
00:01:47.925  Compiler for C supports arguments -mavx512vl: YES 
00:01:47.925  Compiler for C supports arguments -mvpclmulqdq: YES 
00:01:47.925  Compiler for C supports arguments -mavx2: YES 
00:01:47.925  Compiler for C supports arguments -mavx: YES 
00:01:47.925  Message: lib/net: Defining dependency "net"
00:01:47.925  Message: lib/meter: Defining dependency "meter"
00:01:47.925  Message: lib/ethdev: Defining dependency "ethdev"
00:01:47.925  Message: lib/pci: Defining dependency "pci"
00:01:47.925  Message: lib/cmdline: Defining dependency "cmdline"
00:01:47.925  Message: lib/hash: Defining dependency "hash"
00:01:47.925  Message: lib/timer: Defining dependency "timer"
00:01:47.925  Message: lib/compressdev: Defining dependency "compressdev"
00:01:47.925  Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:47.925  Message: lib/dmadev: Defining dependency "dmadev"
00:01:47.925  Compiler for C supports arguments -Wno-cast-qual: YES 
00:01:47.925  Message: lib/power: Defining dependency "power"
00:01:47.925  Message: lib/reorder: Defining dependency "reorder"
00:01:47.925  Message: lib/security: Defining dependency "security"
00:01:47.925  Has header "linux/userfaultfd.h" : YES 
00:01:47.925  Has header "linux/vduse.h" : YES 
00:01:47.925  Message: lib/vhost: Defining dependency "vhost"
00:01:47.925  Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:47.925  Message: drivers/bus/auxiliary: Defining dependency "bus_auxiliary"
00:01:47.925  Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:47.925  Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:47.925  Compiler for C supports arguments -std=c11: YES 
00:01:47.925  Compiler for C supports arguments -Wno-strict-prototypes: YES 
00:01:47.925  Compiler for C supports arguments -D_BSD_SOURCE: YES 
00:01:47.925  Compiler for C supports arguments -D_DEFAULT_SOURCE: YES 
00:01:47.925  Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES 
00:01:47.925  Run-time dependency libmlx5 found: YES 1.24.46.0
00:01:47.925  Run-time dependency libibverbs found: YES 1.14.46.0
00:01:47.925  Library mtcr_ul found: NO
00:01:47.925  Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_ESP" with dependencies libmlx5, libibverbs: YES 
00:01:47.925  Header "infiniband/verbs.h" has symbol "IBV_RX_HASH_IPSEC_SPI" with dependencies libmlx5, libibverbs: YES 
00:01:47.925  Header "infiniband/verbs.h" has symbol "IBV_ACCESS_RELAXED_ORDERING " with dependencies libmlx5, libibverbs: YES 
00:01:47.925  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_flow_action_packet_reformat" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_MPLS" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAGS_PCI_WRITE_END_PADDING" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAG_RX_END_PADDING" with dependencies libmlx5, libibverbs: NO 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_devx_port" with dependencies libmlx5, libibverbs: NO 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_port" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_ib_port" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_create" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_COUNTERS_DEVX" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_DEFAULT_MISS" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_query_async" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_qp_query" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_pp_alloc" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_devx_tir" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_get_event" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_meter" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "MLX5_MMAP_GET_NC_PAGES_CMD" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_NIC_RX" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_FDB" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_push_vlan" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_alloc_var" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ENHANCED_MPSW" with dependencies libmlx5, libibverbs: NO 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_SEND_EN" with dependencies libmlx5, libibverbs: NO 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_WAIT" with dependencies libmlx5, libibverbs: NO 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ACCESS_ASO" with dependencies libmlx5, libibverbs: NO 
00:01:47.926  Header "linux/if_link.h" has symbol "IFLA_NUM_VF" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "linux/if_link.h" has symbol "IFLA_EXT_MASK" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "linux/if_link.h" has symbol "IFLA_PHYS_SWITCH_ID" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "linux/if_link.h" has symbol "IFLA_PHYS_PORT_NAME" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "rdma/rdma_netlink.h" has symbol "RDMA_NL_NLDEV" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_GET" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_PORT_GET" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_INDEX" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_NAME" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_INDEX" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_STATE" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_NDEV_INDEX" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_domain" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_sampler" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_set_reclaim_device_memory" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_array" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "linux/devlink.h" has symbol "DEVLINK_GENL_NAME" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_aso" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/verbs.h" has symbol "INFINIBAND_VERBS_H" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "MLX5_WQE_UMR_CTRL_FLAG_INLINE" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_rule" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_allow_duplicate_rules" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/verbs.h" has symbol "ibv_reg_mr_iova" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/verbs.h" has symbol "ibv_import_device" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_root_table" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_steering_anchor" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Header "infiniband/verbs.h" has symbol "ibv_is_fork_initialized" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Checking whether type "struct mlx5dv_sw_parsing_caps" has member "sw_parsing_offloads" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Checking whether type "struct ibv_counter_set_init_attr" has member "counter_set_id" with dependencies libmlx5, libibverbs: NO 
00:01:47.926  Checking whether type "struct ibv_counters_init_attr" has member "comp_mask" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Checking whether type "struct mlx5dv_devx_uar" has member "mmap_off" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Checking whether type "struct mlx5dv_flow_matcher_attr" has member "ft_type" with dependencies libmlx5, libibverbs: YES 
00:01:47.926  Configuring mlx5_autoconf.h using configuration
00:01:47.926  Message: drivers/common/mlx5: Defining dependency "common_mlx5"
00:01:47.926  Run-time dependency libcrypto found: YES 3.1.1
00:01:47.926  Library IPSec_MB found: YES
00:01:47.926  Fetching value of define "IMB_VERSION_STR" : "1.5.0" 
00:01:47.926  Message: drivers/common/qat: Defining dependency "common_qat"
00:01:47.926  Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:47.926  Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:47.926  Library IPSec_MB found: YES
00:01:47.926  Fetching value of define "IMB_VERSION_STR" : "1.5.0" (cached)
00:01:47.926  Message: drivers/crypto/ipsec_mb: Defining dependency "crypto_ipsec_mb"
00:01:47.926  Compiler for C supports arguments -std=c11: YES (cached)
00:01:47.926  Compiler for C supports arguments -Wno-strict-prototypes: YES (cached)
00:01:47.926  Compiler for C supports arguments -D_BSD_SOURCE: YES (cached)
00:01:47.926  Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached)
00:01:47.926  Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached)
00:01:47.926  Message: drivers/crypto/mlx5: Defining dependency "crypto_mlx5"
00:01:47.926  Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:47.926  Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:47.926  Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:47.926  Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:47.926  Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:47.926  Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:47.926  Configuring doxy-api-html.conf using configuration
00:01:47.926  Configuring doxy-api-man.conf using configuration
00:01:47.926  Program mandb found: YES (/usr/bin/mandb)
00:01:47.926  Program sphinx-build found: NO
00:01:47.926  Configuring rte_build_config.h using configuration
00:01:47.926  Message: 
00:01:47.926  =================
00:01:47.926  Applications Enabled
00:01:47.926  =================
00:01:47.926  
00:01:47.926  apps:
00:01:47.926  	
00:01:47.926  
00:01:47.926  Message: 
00:01:47.926  =================
00:01:47.926  Libraries Enabled
00:01:47.926  =================
00:01:47.926  
00:01:47.926  libs:
00:01:47.926  	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:01:47.926  	net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:01:47.926  	cryptodev, dmadev, power, reorder, security, vhost, 
00:01:47.926  
00:01:47.926  Message: 
00:01:47.926  ===============
00:01:47.926  Drivers Enabled
00:01:47.926  ===============
00:01:47.926  
00:01:47.926  common:
00:01:47.926  	mlx5, qat, 
00:01:47.926  bus:
00:01:47.926  	auxiliary, pci, vdev, 
00:01:47.926  mempool:
00:01:47.926  	ring, 
00:01:47.926  dma:
00:01:47.926  	
00:01:47.926  net:
00:01:47.926  	
00:01:47.926  crypto:
00:01:47.926  	ipsec_mb, mlx5, 
00:01:47.926  compress:
00:01:47.926  	
00:01:47.926  vdpa:
00:01:47.926  	
00:01:47.926  
00:01:47.926  Message: 
00:01:47.926  =================
00:01:47.926  Content Skipped
00:01:47.926  =================
00:01:47.926  
00:01:47.926  apps:
00:01:47.926  	dumpcap:	explicitly disabled via build config
00:01:47.926  	graph:	explicitly disabled via build config
00:01:47.926  	pdump:	explicitly disabled via build config
00:01:47.926  	proc-info:	explicitly disabled via build config
00:01:47.926  	test-acl:	explicitly disabled via build config
00:01:47.926  	test-bbdev:	explicitly disabled via build config
00:01:47.926  	test-cmdline:	explicitly disabled via build config
00:01:47.926  	test-compress-perf:	explicitly disabled via build config
00:01:47.926  	test-crypto-perf:	explicitly disabled via build config
00:01:47.926  	test-dma-perf:	explicitly disabled via build config
00:01:47.926  	test-eventdev:	explicitly disabled via build config
00:01:47.926  	test-fib:	explicitly disabled via build config
00:01:47.926  	test-flow-perf:	explicitly disabled via build config
00:01:47.927  	test-gpudev:	explicitly disabled via build config
00:01:47.927  	test-mldev:	explicitly disabled via build config
00:01:47.927  	test-pipeline:	explicitly disabled via build config
00:01:47.927  	test-pmd:	explicitly disabled via build config
00:01:47.927  	test-regex:	explicitly disabled via build config
00:01:47.927  	test-sad:	explicitly disabled via build config
00:01:47.927  	test-security-perf:	explicitly disabled via build config
00:01:47.927  	
00:01:47.927  libs:
00:01:47.927  	argparse:	explicitly disabled via build config
00:01:47.927  	metrics:	explicitly disabled via build config
00:01:47.927  	acl:	explicitly disabled via build config
00:01:47.927  	bbdev:	explicitly disabled via build config
00:01:47.927  	bitratestats:	explicitly disabled via build config
00:01:47.927  	bpf:	explicitly disabled via build config
00:01:47.927  	cfgfile:	explicitly disabled via build config
00:01:47.927  	distributor:	explicitly disabled via build config
00:01:47.927  	efd:	explicitly disabled via build config
00:01:47.927  	eventdev:	explicitly disabled via build config
00:01:47.927  	dispatcher:	explicitly disabled via build config
00:01:47.927  	gpudev:	explicitly disabled via build config
00:01:47.927  	gro:	explicitly disabled via build config
00:01:47.927  	gso:	explicitly disabled via build config
00:01:47.927  	ip_frag:	explicitly disabled via build config
00:01:47.927  	jobstats:	explicitly disabled via build config
00:01:47.927  	latencystats:	explicitly disabled via build config
00:01:47.927  	lpm:	explicitly disabled via build config
00:01:47.927  	member:	explicitly disabled via build config
00:01:47.927  	pcapng:	explicitly disabled via build config
00:01:47.927  	rawdev:	explicitly disabled via build config
00:01:47.927  	regexdev:	explicitly disabled via build config
00:01:47.927  	mldev:	explicitly disabled via build config
00:01:47.927  	rib:	explicitly disabled via build config
00:01:47.927  	sched:	explicitly disabled via build config
00:01:47.927  	stack:	explicitly disabled via build config
00:01:47.927  	ipsec:	explicitly disabled via build config
00:01:47.927  	pdcp:	explicitly disabled via build config
00:01:47.927  	fib:	explicitly disabled via build config
00:01:47.927  	port:	explicitly disabled via build config
00:01:47.927  	pdump:	explicitly disabled via build config
00:01:47.927  	table:	explicitly disabled via build config
00:01:47.927  	pipeline:	explicitly disabled via build config
00:01:47.927  	graph:	explicitly disabled via build config
00:01:47.927  	node:	explicitly disabled via build config
00:01:47.927  	
00:01:47.927  drivers:
00:01:47.927  	common/cpt:	not in enabled drivers build config
00:01:47.927  	common/dpaax:	not in enabled drivers build config
00:01:47.927  	common/iavf:	not in enabled drivers build config
00:01:47.927  	common/idpf:	not in enabled drivers build config
00:01:47.927  	common/ionic:	not in enabled drivers build config
00:01:47.927  	common/mvep:	not in enabled drivers build config
00:01:47.927  	common/octeontx:	not in enabled drivers build config
00:01:47.927  	bus/cdx:	not in enabled drivers build config
00:01:47.927  	bus/dpaa:	not in enabled drivers build config
00:01:47.927  	bus/fslmc:	not in enabled drivers build config
00:01:47.927  	bus/ifpga:	not in enabled drivers build config
00:01:47.927  	bus/platform:	not in enabled drivers build config
00:01:47.927  	bus/uacce:	not in enabled drivers build config
00:01:47.927  	bus/vmbus:	not in enabled drivers build config
00:01:47.927  	common/cnxk:	not in enabled drivers build config
00:01:47.927  	common/nfp:	not in enabled drivers build config
00:01:47.927  	common/nitrox:	not in enabled drivers build config
00:01:47.927  	common/sfc_efx:	not in enabled drivers build config
00:01:47.927  	mempool/bucket:	not in enabled drivers build config
00:01:47.927  	mempool/cnxk:	not in enabled drivers build config
00:01:47.927  	mempool/dpaa:	not in enabled drivers build config
00:01:47.927  	mempool/dpaa2:	not in enabled drivers build config
00:01:47.927  	mempool/octeontx:	not in enabled drivers build config
00:01:47.927  	mempool/stack:	not in enabled drivers build config
00:01:47.927  	dma/cnxk:	not in enabled drivers build config
00:01:47.927  	dma/dpaa:	not in enabled drivers build config
00:01:47.927  	dma/dpaa2:	not in enabled drivers build config
00:01:47.927  	dma/hisilicon:	not in enabled drivers build config
00:01:47.927  	dma/idxd:	not in enabled drivers build config
00:01:47.927  	dma/ioat:	not in enabled drivers build config
00:01:47.927  	dma/skeleton:	not in enabled drivers build config
00:01:47.927  	net/af_packet:	not in enabled drivers build config
00:01:47.927  	net/af_xdp:	not in enabled drivers build config
00:01:47.927  	net/ark:	not in enabled drivers build config
00:01:47.927  	net/atlantic:	not in enabled drivers build config
00:01:47.927  	net/avp:	not in enabled drivers build config
00:01:47.927  	net/axgbe:	not in enabled drivers build config
00:01:47.927  	net/bnx2x:	not in enabled drivers build config
00:01:47.927  	net/bnxt:	not in enabled drivers build config
00:01:47.927  	net/bonding:	not in enabled drivers build config
00:01:47.927  	net/cnxk:	not in enabled drivers build config
00:01:47.927  	net/cpfl:	not in enabled drivers build config
00:01:47.927  	net/cxgbe:	not in enabled drivers build config
00:01:47.927  	net/dpaa:	not in enabled drivers build config
00:01:47.927  	net/dpaa2:	not in enabled drivers build config
00:01:47.927  	net/e1000:	not in enabled drivers build config
00:01:47.927  	net/ena:	not in enabled drivers build config
00:01:47.927  	net/enetc:	not in enabled drivers build config
00:01:47.927  	net/enetfec:	not in enabled drivers build config
00:01:47.927  	net/enic:	not in enabled drivers build config
00:01:47.927  	net/failsafe:	not in enabled drivers build config
00:01:47.927  	net/fm10k:	not in enabled drivers build config
00:01:47.927  	net/gve:	not in enabled drivers build config
00:01:47.927  	net/hinic:	not in enabled drivers build config
00:01:47.927  	net/hns3:	not in enabled drivers build config
00:01:47.927  	net/i40e:	not in enabled drivers build config
00:01:47.927  	net/iavf:	not in enabled drivers build config
00:01:47.927  	net/ice:	not in enabled drivers build config
00:01:47.927  	net/idpf:	not in enabled drivers build config
00:01:47.927  	net/igc:	not in enabled drivers build config
00:01:47.927  	net/ionic:	not in enabled drivers build config
00:01:47.927  	net/ipn3ke:	not in enabled drivers build config
00:01:47.927  	net/ixgbe:	not in enabled drivers build config
00:01:47.927  	net/mana:	not in enabled drivers build config
00:01:47.927  	net/memif:	not in enabled drivers build config
00:01:47.927  	net/mlx4:	not in enabled drivers build config
00:01:47.927  	net/mlx5:	not in enabled drivers build config
00:01:47.927  	net/mvneta:	not in enabled drivers build config
00:01:47.927  	net/mvpp2:	not in enabled drivers build config
00:01:47.927  	net/netvsc:	not in enabled drivers build config
00:01:47.927  	net/nfb:	not in enabled drivers build config
00:01:47.927  	net/nfp:	not in enabled drivers build config
00:01:47.927  	net/ngbe:	not in enabled drivers build config
00:01:47.927  	net/null:	not in enabled drivers build config
00:01:47.927  	net/octeontx:	not in enabled drivers build config
00:01:47.927  	net/octeon_ep:	not in enabled drivers build config
00:01:47.927  	net/pcap:	not in enabled drivers build config
00:01:47.927  	net/pfe:	not in enabled drivers build config
00:01:47.927  	net/qede:	not in enabled drivers build config
00:01:47.927  	net/ring:	not in enabled drivers build config
00:01:47.927  	net/sfc:	not in enabled drivers build config
00:01:47.927  	net/softnic:	not in enabled drivers build config
00:01:47.927  	net/tap:	not in enabled drivers build config
00:01:47.927  	net/thunderx:	not in enabled drivers build config
00:01:47.927  	net/txgbe:	not in enabled drivers build config
00:01:47.927  	net/vdev_netvsc:	not in enabled drivers build config
00:01:47.927  	net/vhost:	not in enabled drivers build config
00:01:47.927  	net/virtio:	not in enabled drivers build config
00:01:47.927  	net/vmxnet3:	not in enabled drivers build config
00:01:47.927  	raw/*:	missing internal dependency, "rawdev"
00:01:47.927  	crypto/armv8:	not in enabled drivers build config
00:01:47.927  	crypto/bcmfs:	not in enabled drivers build config
00:01:47.927  	crypto/caam_jr:	not in enabled drivers build config
00:01:47.927  	crypto/ccp:	not in enabled drivers build config
00:01:47.927  	crypto/cnxk:	not in enabled drivers build config
00:01:47.927  	crypto/dpaa_sec:	not in enabled drivers build config
00:01:47.927  	crypto/dpaa2_sec:	not in enabled drivers build config
00:01:47.927  	crypto/mvsam:	not in enabled drivers build config
00:01:47.927  	crypto/nitrox:	not in enabled drivers build config
00:01:47.927  	crypto/null:	not in enabled drivers build config
00:01:47.927  	crypto/octeontx:	not in enabled drivers build config
00:01:47.927  	crypto/openssl:	not in enabled drivers build config
00:01:47.927  	crypto/scheduler:	not in enabled drivers build config
00:01:47.927  	crypto/uadk:	not in enabled drivers build config
00:01:47.927  	crypto/virtio:	not in enabled drivers build config
00:01:47.927  	compress/isal:	not in enabled drivers build config
00:01:47.927  	compress/mlx5:	not in enabled drivers build config
00:01:47.927  	compress/nitrox:	not in enabled drivers build config
00:01:47.927  	compress/octeontx:	not in enabled drivers build config
00:01:47.927  	compress/zlib:	not in enabled drivers build config
00:01:47.927  	regex/*:	missing internal dependency, "regexdev"
00:01:47.927  	ml/*:	missing internal dependency, "mldev"
00:01:47.927  	vdpa/ifc:	not in enabled drivers build config
00:01:47.927  	vdpa/mlx5:	not in enabled drivers build config
00:01:47.927  	vdpa/nfp:	not in enabled drivers build config
00:01:47.927  	vdpa/sfc:	not in enabled drivers build config
00:01:47.927  	event/*:	missing internal dependency, "eventdev"
00:01:47.927  	baseband/*:	missing internal dependency, "bbdev"
00:01:47.927  	gpu/*:	missing internal dependency, "gpudev"
00:01:47.927  	
00:01:47.927  
00:01:47.927  Build targets in project: 107
00:01:47.927  
00:01:47.927  DPDK 24.03.0
00:01:47.927  
00:01:47.927    User defined options
00:01:47.927      buildtype          : debug
00:01:47.927      default_library    : shared
00:01:47.927      libdir             : lib
00:01:47.927      prefix             : /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build
00:01:47.927      b_sanitize         : address
00:01:47.927      c_args             : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -I/var/jenkins/workspace/vfio-user-phy-autotest/spdk/intel-ipsec-mb/lib -DNO_COMPAT_IMB_API_053 -fPIC -Werror 
00:01:47.927      c_link_args        : -L/var/jenkins/workspace/vfio-user-phy-autotest/spdk/intel-ipsec-mb/lib
00:01:47.927      cpu_instruction_set: native
00:01:47.927      disable_apps       : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:01:47.928      disable_libs       : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:01:47.928      enable_docs        : false
00:01:47.928      enable_drivers     : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,crypto/qat,compress/qat,common/qat,common/mlx5,bus/auxiliary,crypto,crypto/aesni_mb,crypto/mlx5,crypto/ipsec_mb
00:01:47.928      enable_kmods       : false
00:01:47.928      max_lcores         : 128
00:01:47.928      tests              : false
00:01:47.928  
00:01:47.928  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:47.928  ninja: Entering directory `/var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build-tmp'
00:01:47.928  [1/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:47.928  [2/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:47.928  [3/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:47.928  [4/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:47.928  [5/363] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:47.928  [6/363] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:47.928  [7/363] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:47.928  [8/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:47.928  [9/363] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:47.928  [10/363] Linking static target lib/librte_kvargs.a
00:01:47.928  [11/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:47.928  [12/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:47.928  [13/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:47.928  [14/363] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:47.928  [15/363] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:47.928  [16/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:47.928  [17/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:47.928  [18/363] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:47.928  [19/363] Linking static target lib/librte_log.a
00:01:47.928  [20/363] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:47.928  [21/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:47.928  [22/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:47.928  [23/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:47.928  [24/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:47.928  [25/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:47.928  [26/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:47.928  [27/363] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:47.928  [28/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:47.928  [29/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:47.928  [30/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:47.928  [31/363] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:47.928  [32/363] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:47.928  [33/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:47.928  [34/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:47.928  [35/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:47.928  [36/363] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:47.928  [37/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:47.928  [38/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:47.928  [39/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:47.928  [40/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:47.928  [41/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:47.928  [42/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:47.928  [43/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:47.928  [44/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:47.928  [45/363] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:47.928  [46/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:47.928  [47/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:47.928  [48/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:47.928  [49/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:47.928  [50/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:47.928  [51/363] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:47.928  [52/363] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:47.928  [53/363] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:47.928  [54/363] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:47.928  [55/363] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:47.928  [56/363] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:47.928  [57/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:47.928  [58/363] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:47.928  [59/363] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:47.928  [60/363] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:47.928  [61/363] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:47.928  [62/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:47.928  [63/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:47.928  [64/363] Linking static target lib/librte_meter.a
00:01:47.928  [65/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:47.928  [66/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:47.928  [67/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:47.928  [68/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:47.928  [69/363] Linking static target lib/librte_pci.a
00:01:47.928  [70/363] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:47.928  [71/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:47.928  [72/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:47.928  [73/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:47.928  [74/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:47.928  [75/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:47.928  [76/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:47.928  [77/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:47.928  [78/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:47.928  [79/363] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:47.928  [80/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:47.928  [81/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:47.928  [82/363] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:47.928  [83/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:47.928  [84/363] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:47.928  [85/363] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:47.928  [86/363] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:47.928  [87/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:47.928  [88/363] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:47.928  [89/363] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:47.928  [90/363] Linking static target lib/librte_telemetry.a
00:01:47.928  [91/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:47.928  [92/363] Linking static target lib/librte_ring.a
00:01:47.928  [93/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:47.928  [94/363] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:47.928  [95/363] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:47.928  [96/363] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:47.928  [97/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:47.928  [98/363] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:47.928  [99/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:47.928  [100/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:47.928  [101/363] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:47.928  [102/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:47.928  [103/363] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:47.928  [104/363] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:47.928  [105/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:47.928  [106/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:47.928  [107/363] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_params.c.o
00:01:47.928  [108/363] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:47.928  [109/363] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:47.928  [110/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:47.928  [111/363] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:47.928  [112/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:47.928  [113/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:47.928  [114/363] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:47.928  [115/363] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:47.928  [116/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:47.928  [117/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:47.928  [118/363] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:47.928  [119/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_logs.c.o
00:01:47.928  [120/363] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:47.928  [121/363] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:47.928  [122/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:47.928  [123/363] Linking static target lib/librte_mempool.a
00:01:47.928  [124/363] Linking static target lib/librte_net.a
00:01:47.928  [125/363] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.192  [126/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:48.192  [127/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:48.192  [128/363] Linking static target lib/librte_eal.a
00:01:48.192  [129/363] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:48.192  [130/363] Linking static target lib/librte_rcu.a
00:01:48.192  [131/363] Linking target lib/librte_log.so.24.1
00:01:48.192  [132/363] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.192  [133/363] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.192  [134/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:48.192  [135/363] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.192  [136/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_glue.c.o
00:01:48.454  [137/363] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:48.454  [138/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:48.454  [139/363] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:48.454  [140/363] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:48.454  [141/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:48.454  [142/363] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:48.454  [143/363] Linking static target lib/librte_cmdline.a
00:01:48.455  [144/363] Linking target lib/librte_kvargs.so.24.1
00:01:48.455  [145/363] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:48.455  [146/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:48.455  [147/363] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:48.455  [148/363] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:48.455  [149/363] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_linux_auxiliary.c.o
00:01:48.455  [150/363] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:48.455  [151/363] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:48.455  [152/363] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.455  [153/363] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:48.455  [154/363] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:48.455  [155/363] Linking static target lib/librte_timer.a
00:01:48.455  [156/363] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:48.455  [157/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:48.455  [158/363] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.455  [159/363] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:48.713  [160/363] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:48.713  [161/363] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:48.713  [162/363] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:48.714  [163/363] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:48.714  [164/363] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_common.c.o
00:01:48.714  [165/363] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:01:48.714  [166/363] Linking static target drivers/libtmp_rte_bus_auxiliary.a
00:01:48.714  [167/363] Linking target lib/librte_telemetry.so.24.1
00:01:48.714  [168/363] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:48.714  [169/363] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:48.714  [170/363] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:48.714  [171/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:48.714  [172/363] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:48.714  [173/363] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:48.714  [174/363] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:48.714  [175/363] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:48.714  [176/363] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:48.714  [177/363] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.714  [178/363] Linking static target lib/librte_dmadev.a
00:01:48.714  [179/363] Linking static target lib/librte_power.a
00:01:48.714  [180/363] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:48.714  [181/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:48.714  [182/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:48.714  [183/363] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:48.714  [184/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen5.c.o
00:01:48.714  [185/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen2.c.o
00:01:48.714  [186/363] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:48.714  [187/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_common.c.o
00:01:48.714  [188/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_pf2vf.c.o
00:01:48.714  [189/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen3.c.o
00:01:48.714  [190/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen1.c.o
00:01:48.714  [191/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_verbs.c.o
00:01:48.714  [192/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_malloc.c.o
00:01:48.714  [193/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_pci.c.o
00:01:48.714  [194/363] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:48.714  [195/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mp.c.o
00:01:48.714  [196/363] Linking static target lib/librte_compressdev.a
00:01:48.972  [197/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen1.c.o
00:01:48.972  [198/363] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:01:48.972  [199/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen2.c.o
00:01:48.972  [200/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen_lce.c.o
00:01:48.972  [201/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:48.972  [202/363] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:48.972  [203/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen3.c.o
00:01:48.972  [204/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_devx.c.o
00:01:48.972  [205/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_auxiliary.c.o
00:01:48.972  [206/363] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:48.972  [207/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp_pmd.c.o
00:01:48.972  [208/363] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:48.972  [209/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen4.c.o
00:01:48.972  [210/363] Linking static target lib/librte_mbuf.a
00:01:48.972  [211/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_utils.c.o
00:01:48.972  [212/363] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:48.972  [213/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_device.c.o
00:01:48.972  [214/363] Linking static target lib/librte_reorder.a
00:01:48.972  [215/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen5.c.o
00:01:48.972  [216/363] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:48.972  [217/363] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.972  [218/363] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.972  [219/363] Generating drivers/rte_bus_auxiliary.pmd.c with a custom command
00:01:48.972  [220/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_os.c.o
00:01:48.972  [221/363] Compiling C object drivers/librte_bus_auxiliary.a.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o
00:01:48.972  [222/363] Compiling C object drivers/librte_bus_auxiliary.so.24.1.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o
00:01:48.972  [223/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen4.c.o
00:01:48.972  [224/363] Linking static target drivers/librte_bus_auxiliary.a
00:01:48.972  [225/363] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:48.972  [226/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common.c.o
00:01:48.972  [227/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_nl.c.o
00:01:48.972  [228/363] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:48.972  [229/363] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:48.972  [230/363] Linking static target drivers/librte_bus_vdev.a
00:01:49.231  [231/363] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:49.231  [232/363] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:49.231  [233/363] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:49.231  [234/363] Linking static target drivers/librte_bus_pci.a
00:01:49.231  [235/363] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:49.231  [236/363] Linking static target lib/librte_security.a
00:01:49.231  [237/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen5.c.o
00:01:49.231  [238/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym.c.o
00:01:49.231  [239/363] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.231  [240/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_crypto.c.o
00:01:49.231  [241/363] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.231  [242/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_qp.c.o
00:01:49.231  [243/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_ops.c.o
00:01:49.231  [244/363] Generating drivers/rte_bus_auxiliary.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.231  [245/363] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:01:49.231  [246/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_asym_pmd_gen1.c.o
00:01:49.231  [247/363] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.231  [248/363] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.231  [249/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen_lce.c.o
00:01:49.231  [250/363] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.231  [251/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen2.c.o
00:01:49.231  [252/363] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.231  [253/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mr.c.o
00:01:49.489  [254/363] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.489  [255/363] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_dek.c.o
00:01:49.489  [256/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:49.489  [257/363] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:49.489  [258/363] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:49.489  [259/363] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:49.489  [260/363] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.489  [261/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_private.c.o
00:01:49.489  [262/363] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.489  [263/363] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto.c.o
00:01:49.489  [264/363] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_gcm.c.o
00:01:49.489  [265/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp.c.o
00:01:49.489  [266/363] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_xts.c.o
00:01:49.747  [267/363] Linking static target drivers/libtmp_rte_crypto_mlx5.a
00:01:49.747  [268/363] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:01:49.747  [269/363] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:49.747  [270/363] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:49.747  [271/363] Linking static target drivers/librte_mempool_ring.a
00:01:49.747  [272/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_chacha_poly.c.o
00:01:49.747  [273/363] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:01:49.747  [274/363] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:49.747  [275/363] Linking static target lib/librte_cryptodev.a
00:01:49.747  [276/363] Linking static target lib/librte_hash.a
00:01:49.747  [277/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_gcm.c.o
00:01:49.747  [278/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_zuc.c.o
00:01:49.747  [279/363] Generating drivers/rte_crypto_mlx5.pmd.c with a custom command
00:01:49.747  [280/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_kasumi.c.o
00:01:49.747  [281/363] Compiling C object drivers/librte_crypto_mlx5.a.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o
00:01:49.748  [282/363] Compiling C object drivers/librte_crypto_mlx5.so.24.1.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o
00:01:49.748  [283/363] Linking static target drivers/librte_crypto_mlx5.a
00:01:50.005  [284/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_devx_cmds.c.o
00:01:50.005  [285/363] Linking static target drivers/libtmp_rte_common_mlx5.a
00:01:50.005  [286/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_snow3g.c.o
00:01:50.005  [287/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym_session.c.o
00:01:50.005  [288/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen4.c.o
00:01:50.005  [289/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_mb.c.o
00:01:50.005  [290/363] Linking static target drivers/libtmp_rte_crypto_ipsec_mb.a
00:01:50.264  [291/363] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.264  [292/363] Generating drivers/rte_common_mlx5.pmd.c with a custom command
00:01:50.264  [293/363] Compiling C object drivers/librte_common_mlx5.a.p/meson-generated_.._rte_common_mlx5.pmd.c.o
00:01:50.264  [294/363] Compiling C object drivers/librte_common_mlx5.so.24.1.p/meson-generated_.._rte_common_mlx5.pmd.c.o
00:01:50.264  [295/363] Linking static target drivers/librte_common_mlx5.a
00:01:50.264  [296/363] Generating drivers/rte_crypto_ipsec_mb.pmd.c with a custom command
00:01:50.521  [297/363] Compiling C object drivers/librte_crypto_ipsec_mb.a.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o
00:01:50.521  [298/363] Compiling C object drivers/librte_crypto_ipsec_mb.so.24.1.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o
00:01:50.521  [299/363] Linking static target drivers/librte_crypto_ipsec_mb.a
00:01:50.521  [300/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_sym_pmd_gen1.c.o
00:01:50.521  [301/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:01:50.521  [302/363] Linking static target lib/librte_ethdev.a
00:01:50.779  [303/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen3.c.o
00:01:50.779  [304/363] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.713  [305/363] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:01:53.086  [306/363] Generating drivers/rte_common_mlx5.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.345  [307/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_asym.c.o
00:01:53.345  [308/363] Linking static target drivers/libtmp_rte_common_qat.a
00:01:53.602  [309/363] Generating drivers/rte_common_qat.pmd.c with a custom command
00:01:53.603  [310/363] Compiling C object drivers/librte_common_qat.so.24.1.p/meson-generated_.._rte_common_qat.pmd.c.o
00:01:53.603  [311/363] Compiling C object drivers/librte_common_qat.a.p/meson-generated_.._rte_common_qat.pmd.c.o
00:01:53.603  [312/363] Linking static target drivers/librte_common_qat.a
00:01:53.861  [313/363] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.861  [314/363] Linking target lib/librte_eal.so.24.1
00:01:53.861  [315/363] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:01:53.861  [316/363] Linking target lib/librte_ring.so.24.1
00:01:53.861  [317/363] Linking target lib/librte_meter.so.24.1
00:01:53.861  [318/363] Linking target lib/librte_pci.so.24.1
00:01:53.861  [319/363] Linking target drivers/librte_bus_vdev.so.24.1
00:01:53.861  [320/363] Linking target drivers/librte_bus_auxiliary.so.24.1
00:01:53.861  [321/363] Linking target lib/librte_dmadev.so.24.1
00:01:53.861  [322/363] Linking target lib/librte_timer.so.24.1
00:01:54.120  [323/363] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:01:54.120  [324/363] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:01:54.120  [325/363] Generating symbol file drivers/librte_bus_auxiliary.so.24.1.p/librte_bus_auxiliary.so.24.1.symbols
00:01:54.120  [326/363] Generating symbol file drivers/librte_bus_vdev.so.24.1.p/librte_bus_vdev.so.24.1.symbols
00:01:54.120  [327/363] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:01:54.120  [328/363] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:01:54.120  [329/363] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:01:54.120  [330/363] Linking target lib/librte_mempool.so.24.1
00:01:54.120  [331/363] Linking target lib/librte_rcu.so.24.1
00:01:54.120  [332/363] Linking target drivers/librte_bus_pci.so.24.1
00:01:54.379  [333/363] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:01:54.379  [334/363] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:01:54.379  [335/363] Generating symbol file drivers/librte_bus_pci.so.24.1.p/librte_bus_pci.so.24.1.symbols
00:01:54.379  [336/363] Linking target drivers/librte_mempool_ring.so.24.1
00:01:54.379  [337/363] Linking target lib/librte_mbuf.so.24.1
00:01:54.379  [338/363] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:01:54.379  [339/363] Linking target lib/librte_net.so.24.1
00:01:54.379  [340/363] Linking target lib/librte_compressdev.so.24.1
00:01:54.379  [341/363] Linking target lib/librte_cryptodev.so.24.1
00:01:54.379  [342/363] Linking target lib/librte_reorder.so.24.1
00:01:54.640  [343/363] Generating symbol file lib/librte_compressdev.so.24.1.p/librte_compressdev.so.24.1.symbols
00:01:54.640  [344/363] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:01:54.640  [345/363] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:01:54.640  [346/363] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.640  [347/363] Linking target lib/librte_security.so.24.1
00:01:54.640  [348/363] Linking target lib/librte_cmdline.so.24.1
00:01:54.640  [349/363] Linking target lib/librte_hash.so.24.1
00:01:54.640  [350/363] Linking target lib/librte_ethdev.so.24.1
00:01:54.640  [351/363] Generating symbol file lib/librte_security.so.24.1.p/librte_security.so.24.1.symbols
00:01:54.640  [352/363] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:01:54.899  [353/363] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:01:54.899  [354/363] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:01:54.899  [355/363] Linking static target lib/librte_vhost.a
00:01:54.899  [356/363] Linking target drivers/librte_common_mlx5.so.24.1
00:01:54.899  [357/363] Linking target lib/librte_power.so.24.1
00:01:54.899  [358/363] Generating symbol file drivers/librte_common_mlx5.so.24.1.p/librte_common_mlx5.so.24.1.symbols
00:01:54.899  [359/363] Linking target drivers/librte_crypto_mlx5.so.24.1
00:01:54.899  [360/363] Linking target drivers/librte_crypto_ipsec_mb.so.24.1
00:01:54.899  [361/363] Linking target drivers/librte_common_qat.so.24.1
00:01:55.836  [362/363] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.836  [363/363] Linking target lib/librte_vhost.so.24.1
00:01:55.836  INFO: autodetecting backend as ninja
00:01:55.836  INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build-tmp -j 88
00:01:56.770    CC lib/ut_mock/mock.o
00:01:56.770    CC lib/ut/ut.o
00:01:56.770    CC lib/log/log.o
00:01:56.770    CC lib/log/log_flags.o
00:01:56.770    CC lib/log/log_deprecated.o
00:01:56.770    LIB libspdk_ut_mock.a
00:01:56.770    LIB libspdk_ut.a
00:01:56.770    SO libspdk_ut_mock.so.6.0
00:01:56.770    LIB libspdk_log.a
00:01:56.770    SO libspdk_ut.so.2.0
00:01:56.770    SO libspdk_log.so.7.1
00:01:56.771    SYMLINK libspdk_ut_mock.so
00:01:56.771    SYMLINK libspdk_ut.so
00:01:56.771    SYMLINK libspdk_log.so
00:01:57.029    CC lib/ioat/ioat.o
00:01:57.029    CC lib/dma/dma.o
00:01:57.029    CC lib/util/base64.o
00:01:57.029    CC lib/util/bit_array.o
00:01:57.029    CC lib/util/crc16.o
00:01:57.029    CC lib/util/cpuset.o
00:01:57.029    CC lib/util/crc32.o
00:01:57.029    CC lib/util/crc32c.o
00:01:57.029    CC lib/util/crc32_ieee.o
00:01:57.029    CC lib/util/crc64.o
00:01:57.029    CC lib/util/dif.o
00:01:57.029    CC lib/util/fd.o
00:01:57.029    CC lib/util/fd_group.o
00:01:57.029    CC lib/util/file.o
00:01:57.029    CC lib/util/hexlify.o
00:01:57.029    CC lib/util/iov.o
00:01:57.029    CC lib/util/math.o
00:01:57.029    CC lib/util/net.o
00:01:57.029    CC lib/util/pipe.o
00:01:57.029    CC lib/util/strerror_tls.o
00:01:57.029    CXX lib/trace_parser/trace.o
00:01:57.029    CC lib/util/string.o
00:01:57.029    CC lib/util/uuid.o
00:01:57.029    CC lib/util/xor.o
00:01:57.029    CC lib/util/zipf.o
00:01:57.029    CC lib/util/md5.o
00:01:57.029    CC lib/vfio_user/host/vfio_user_pci.o
00:01:57.029    CC lib/vfio_user/host/vfio_user.o
00:01:57.288    LIB libspdk_dma.a
00:01:57.288    SO libspdk_dma.so.5.0
00:01:57.288    SYMLINK libspdk_dma.so
00:01:57.288    LIB libspdk_ioat.a
00:01:57.288    SO libspdk_ioat.so.7.0
00:01:57.288    LIB libspdk_vfio_user.a
00:01:57.546    SYMLINK libspdk_ioat.so
00:01:57.546    SO libspdk_vfio_user.so.5.0
00:01:57.546    SYMLINK libspdk_vfio_user.so
00:01:57.804    LIB libspdk_util.a
00:01:57.804    SO libspdk_util.so.10.1
00:01:57.804    SYMLINK libspdk_util.so
00:01:58.063    LIB libspdk_trace_parser.a
00:01:58.063    SO libspdk_trace_parser.so.6.0
00:01:58.063    CC lib/conf/conf.o
00:01:58.063    CC lib/rdma_utils/rdma_utils.o
00:01:58.063    CC lib/vmd/vmd.o
00:01:58.063    CC lib/vmd/led.o
00:01:58.063    CC lib/idxd/idxd.o
00:01:58.063    CC lib/idxd/idxd_user.o
00:01:58.063    CC lib/json/json_parse.o
00:01:58.063    CC lib/idxd/idxd_kernel.o
00:01:58.063    CC lib/json/json_util.o
00:01:58.063    CC lib/json/json_write.o
00:01:58.063    CC lib/env_dpdk/env.o
00:01:58.063    CC lib/env_dpdk/memory.o
00:01:58.063    CC lib/env_dpdk/pci.o
00:01:58.063    CC lib/env_dpdk/init.o
00:01:58.063    CC lib/env_dpdk/threads.o
00:01:58.063    CC lib/env_dpdk/pci_ioat.o
00:01:58.063    CC lib/env_dpdk/pci_virtio.o
00:01:58.063    CC lib/env_dpdk/pci_vmd.o
00:01:58.063    CC lib/env_dpdk/pci_idxd.o
00:01:58.063    CC lib/env_dpdk/pci_event.o
00:01:58.063    CC lib/env_dpdk/sigbus_handler.o
00:01:58.063    CC lib/env_dpdk/pci_dpdk.o
00:01:58.063    CC lib/env_dpdk/pci_dpdk_2207.o
00:01:58.063    CC lib/env_dpdk/pci_dpdk_2211.o
00:01:58.063    SYMLINK libspdk_trace_parser.so
00:01:58.321    LIB libspdk_conf.a
00:01:58.321    SO libspdk_conf.so.6.0
00:01:58.321    SYMLINK libspdk_conf.so
00:01:58.321    LIB libspdk_json.a
00:01:58.321    SO libspdk_json.so.6.0
00:01:58.321    LIB libspdk_rdma_utils.a
00:01:58.579    SO libspdk_rdma_utils.so.1.0
00:01:58.579    SYMLINK libspdk_json.so
00:01:58.579    SYMLINK libspdk_rdma_utils.so
00:01:58.579    CC lib/jsonrpc/jsonrpc_server.o
00:01:58.579    CC lib/jsonrpc/jsonrpc_server_tcp.o
00:01:58.579    CC lib/jsonrpc/jsonrpc_client.o
00:01:58.579    CC lib/jsonrpc/jsonrpc_client_tcp.o
00:01:58.579    CC lib/rdma_provider/common.o
00:01:58.579    CC lib/rdma_provider/rdma_provider_verbs.o
00:01:58.838    LIB libspdk_idxd.a
00:01:58.838    LIB libspdk_vmd.a
00:01:58.838    SO libspdk_idxd.so.12.1
00:01:58.838    SO libspdk_vmd.so.6.0
00:01:58.838    LIB libspdk_rdma_provider.a
00:01:58.838    SO libspdk_rdma_provider.so.7.0
00:01:58.838    SYMLINK libspdk_idxd.so
00:01:58.838    LIB libspdk_jsonrpc.a
00:01:58.838    SYMLINK libspdk_vmd.so
00:01:58.838    SO libspdk_jsonrpc.so.6.0
00:01:58.838    SYMLINK libspdk_rdma_provider.so
00:01:59.097    SYMLINK libspdk_jsonrpc.so
00:01:59.097    CC lib/rpc/rpc.o
00:01:59.357    LIB libspdk_rpc.a
00:01:59.357    SO libspdk_rpc.so.6.0
00:01:59.357    SYMLINK libspdk_rpc.so
00:01:59.616    CC lib/keyring/keyring.o
00:01:59.616    CC lib/keyring/keyring_rpc.o
00:01:59.616    CC lib/notify/notify.o
00:01:59.616    CC lib/notify/notify_rpc.o
00:01:59.616    CC lib/trace/trace.o
00:01:59.616    CC lib/trace/trace_flags.o
00:01:59.616    CC lib/trace/trace_rpc.o
00:01:59.616    LIB libspdk_env_dpdk.a
00:01:59.616    LIB libspdk_notify.a
00:01:59.616    SO libspdk_notify.so.6.0
00:01:59.616    SO libspdk_env_dpdk.so.15.1
00:01:59.616    SYMLINK libspdk_notify.so
00:01:59.616    LIB libspdk_keyring.a
00:01:59.875    SO libspdk_keyring.so.2.0
00:01:59.875    LIB libspdk_trace.a
00:01:59.875    SO libspdk_trace.so.11.0
00:01:59.875    SYMLINK libspdk_env_dpdk.so
00:01:59.875    SYMLINK libspdk_keyring.so
00:01:59.875    SYMLINK libspdk_trace.so
00:01:59.875    CC lib/thread/thread.o
00:01:59.875    CC lib/thread/iobuf.o
00:02:00.134    CC lib/sock/sock.o
00:02:00.134    CC lib/sock/sock_rpc.o
00:02:00.393    LIB libspdk_sock.a
00:02:00.393    SO libspdk_sock.so.10.0
00:02:00.652    SYMLINK libspdk_sock.so
00:02:00.652    CC lib/nvme/nvme_ctrlr_cmd.o
00:02:00.652    CC lib/nvme/nvme_ctrlr.o
00:02:00.652    CC lib/nvme/nvme_fabric.o
00:02:00.652    CC lib/nvme/nvme_ns_cmd.o
00:02:00.652    CC lib/nvme/nvme_ns.o
00:02:00.652    CC lib/nvme/nvme_pcie_common.o
00:02:00.652    CC lib/nvme/nvme_pcie.o
00:02:00.652    CC lib/nvme/nvme_qpair.o
00:02:00.652    CC lib/nvme/nvme.o
00:02:00.652    CC lib/nvme/nvme_quirks.o
00:02:00.652    CC lib/nvme/nvme_transport.o
00:02:00.652    CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:00.652    CC lib/nvme/nvme_discovery.o
00:02:00.652    CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:00.652    CC lib/nvme/nvme_tcp.o
00:02:00.652    CC lib/nvme/nvme_opal.o
00:02:00.652    CC lib/nvme/nvme_io_msg.o
00:02:00.652    CC lib/nvme/nvme_poll_group.o
00:02:00.652    CC lib/nvme/nvme_zns.o
00:02:00.652    CC lib/nvme/nvme_stubs.o
00:02:00.652    CC lib/nvme/nvme_auth.o
00:02:00.652    CC lib/nvme/nvme_cuse.o
00:02:00.652    CC lib/nvme/nvme_rdma.o
00:02:00.652    CC lib/nvme/nvme_vfio_user.o
00:02:01.589    LIB libspdk_thread.a
00:02:01.589    SO libspdk_thread.so.11.0
00:02:01.848    SYMLINK libspdk_thread.so
00:02:01.848    CC lib/init/json_config.o
00:02:01.848    CC lib/init/subsystem.o
00:02:01.848    CC lib/fsdev/fsdev.o
00:02:01.848    CC lib/fsdev/fsdev_io.o
00:02:01.848    CC lib/blob/blobstore.o
00:02:01.848    CC lib/init/subsystem_rpc.o
00:02:01.848    CC lib/accel/accel.o
00:02:01.848    CC lib/fsdev/fsdev_rpc.o
00:02:01.848    CC lib/blob/request.o
00:02:01.848    CC lib/init/rpc.o
00:02:01.848    CC lib/accel/accel_rpc.o
00:02:01.848    CC lib/blob/zeroes.o
00:02:01.848    CC lib/accel/accel_sw.o
00:02:01.848    CC lib/blob/blob_bs_dev.o
00:02:01.848    CC lib/vfu_tgt/tgt_endpoint.o
00:02:01.848    CC lib/vfu_tgt/tgt_rpc.o
00:02:01.848    CC lib/virtio/virtio.o
00:02:01.848    CC lib/virtio/virtio_vhost_user.o
00:02:01.848    CC lib/virtio/virtio_vfio_user.o
00:02:01.848    CC lib/virtio/virtio_pci.o
00:02:02.107    LIB libspdk_init.a
00:02:02.107    SO libspdk_init.so.6.0
00:02:02.365    SYMLINK libspdk_init.so
00:02:02.365    LIB libspdk_vfu_tgt.a
00:02:02.365    LIB libspdk_virtio.a
00:02:02.365    SO libspdk_vfu_tgt.so.3.0
00:02:02.365    SO libspdk_virtio.so.7.0
00:02:02.365    SYMLINK libspdk_vfu_tgt.so
00:02:02.365    SYMLINK libspdk_virtio.so
00:02:02.365    CC lib/event/app.o
00:02:02.365    CC lib/event/reactor.o
00:02:02.365    CC lib/event/log_rpc.o
00:02:02.365    CC lib/event/app_rpc.o
00:02:02.365    CC lib/event/scheduler_static.o
00:02:02.624    LIB libspdk_fsdev.a
00:02:02.624    SO libspdk_fsdev.so.2.0
00:02:02.624    SYMLINK libspdk_fsdev.so
00:02:02.883    CC lib/fuse_dispatcher/fuse_dispatcher.o
00:02:02.883    LIB libspdk_event.a
00:02:02.883    SO libspdk_event.so.14.0
00:02:03.141    SYMLINK libspdk_event.so
00:02:03.141    LIB libspdk_accel.a
00:02:03.141    SO libspdk_accel.so.16.0
00:02:03.141    LIB libspdk_nvme.a
00:02:03.141    SYMLINK libspdk_accel.so
00:02:03.400    SO libspdk_nvme.so.15.0
00:02:03.400    CC lib/bdev/bdev.o
00:02:03.400    CC lib/bdev/bdev_rpc.o
00:02:03.400    CC lib/bdev/bdev_zone.o
00:02:03.400    CC lib/bdev/part.o
00:02:03.400    CC lib/bdev/scsi_nvme.o
00:02:03.659    SYMLINK libspdk_nvme.so
00:02:03.918    LIB libspdk_fuse_dispatcher.a
00:02:03.918    SO libspdk_fuse_dispatcher.so.1.0
00:02:03.918    SYMLINK libspdk_fuse_dispatcher.so
00:02:05.297    LIB libspdk_blob.a
00:02:05.297    SO libspdk_blob.so.12.0
00:02:05.297    SYMLINK libspdk_blob.so
00:02:05.556    CC lib/blobfs/blobfs.o
00:02:05.556    CC lib/blobfs/tree.o
00:02:05.556    CC lib/lvol/lvol.o
00:02:06.124    LIB libspdk_bdev.a
00:02:06.124    SO libspdk_bdev.so.17.0
00:02:06.389    SYMLINK libspdk_bdev.so
00:02:06.389    LIB libspdk_blobfs.a
00:02:06.389    SO libspdk_blobfs.so.11.0
00:02:06.389    LIB libspdk_lvol.a
00:02:06.389    CC lib/nbd/nbd.o
00:02:06.389    CC lib/nbd/nbd_rpc.o
00:02:06.389    CC lib/ublk/ublk.o
00:02:06.389    CC lib/ublk/ublk_rpc.o
00:02:06.389    CC lib/ftl/ftl_core.o
00:02:06.389    CC lib/scsi/dev.o
00:02:06.389    CC lib/ftl/ftl_init.o
00:02:06.389    CC lib/scsi/lun.o
00:02:06.389    CC lib/ftl/ftl_layout.o
00:02:06.389    CC lib/scsi/port.o
00:02:06.389    CC lib/ftl/ftl_debug.o
00:02:06.389    CC lib/ftl/ftl_io.o
00:02:06.389    CC lib/scsi/scsi.o
00:02:06.389    CC lib/ftl/ftl_sb.o
00:02:06.389    CC lib/scsi/scsi_bdev.o
00:02:06.389    CC lib/ftl/ftl_l2p.o
00:02:06.389    CC lib/scsi/scsi_pr.o
00:02:06.389    CC lib/nvmf/ctrlr.o
00:02:06.389    CC lib/scsi/scsi_rpc.o
00:02:06.389    CC lib/ftl/ftl_l2p_flat.o
00:02:06.389    CC lib/scsi/task.o
00:02:06.389    CC lib/nvmf/ctrlr_discovery.o
00:02:06.389    CC lib/nvmf/ctrlr_bdev.o
00:02:06.389    CC lib/ftl/ftl_nv_cache.o
00:02:06.389    CC lib/nvmf/subsystem.o
00:02:06.389    CC lib/ftl/ftl_band.o
00:02:06.389    SYMLINK libspdk_blobfs.so
00:02:06.389    CC lib/nvmf/nvmf.o
00:02:06.389    CC lib/nvmf/nvmf_rpc.o
00:02:06.389    CC lib/ftl/ftl_band_ops.o
00:02:06.389    CC lib/ftl/ftl_writer.o
00:02:06.389    CC lib/nvmf/transport.o
00:02:06.389    CC lib/ftl/ftl_rq.o
00:02:06.389    CC lib/nvmf/tcp.o
00:02:06.389    CC lib/ftl/ftl_reloc.o
00:02:06.389    CC lib/nvmf/stubs.o
00:02:06.389    CC lib/ftl/ftl_l2p_cache.o
00:02:06.389    CC lib/nvmf/mdns_server.o
00:02:06.389    CC lib/nvmf/vfio_user.o
00:02:06.389    CC lib/ftl/ftl_p2l_log.o
00:02:06.389    CC lib/ftl/ftl_p2l.o
00:02:06.389    CC lib/nvmf/rdma.o
00:02:06.389    CC lib/ftl/mngt/ftl_mngt.o
00:02:06.389    CC lib/nvmf/auth.o
00:02:06.389    CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:06.389    SO libspdk_lvol.so.11.0
00:02:06.390    CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:06.390    CC lib/ftl/mngt/ftl_mngt_md.o
00:02:06.390    CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:06.390    CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:06.390    CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:06.390    CC lib/ftl/mngt/ftl_mngt_l2p.o
00:02:06.390    CC lib/ftl/mngt/ftl_mngt_band.o
00:02:06.390    CC lib/ftl/mngt/ftl_mngt_self_test.o
00:02:06.390    CC lib/ftl/mngt/ftl_mngt_p2l.o
00:02:06.390    CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:02:06.390    CC lib/ftl/mngt/ftl_mngt_recovery.o
00:02:06.390    CC lib/ftl/utils/ftl_conf.o
00:02:06.390    CC lib/ftl/utils/ftl_md.o
00:02:06.390    CC lib/ftl/utils/ftl_mempool.o
00:02:06.390    CC lib/ftl/utils/ftl_bitmap.o
00:02:06.390    CC lib/ftl/utils/ftl_property.o
00:02:06.390    CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:02:06.390    CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:02:06.390    CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:02:06.390    CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:02:06.390    CC lib/ftl/upgrade/ftl_band_upgrade.o
00:02:06.390    CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:02:06.390    CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:02:06.390    CC lib/ftl/upgrade/ftl_sb_v3.o
00:02:06.390    CC lib/ftl/upgrade/ftl_sb_v5.o
00:02:06.390    CC lib/ftl/nvc/ftl_nvc_dev.o
00:02:06.390    CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:02:06.390    CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:02:06.390    CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:02:06.390    CC lib/ftl/base/ftl_base_bdev.o
00:02:06.390    CC lib/ftl/base/ftl_base_dev.o
00:02:06.390    CC lib/ftl/ftl_trace.o
00:02:06.648    SYMLINK libspdk_lvol.so
00:02:07.214    LIB libspdk_nbd.a
00:02:07.214    SO libspdk_nbd.so.7.0
00:02:07.214    SYMLINK libspdk_nbd.so
00:02:07.472    LIB libspdk_scsi.a
00:02:07.472    LIB libspdk_ublk.a
00:02:07.472    SO libspdk_scsi.so.9.0
00:02:07.472    SO libspdk_ublk.so.3.0
00:02:07.472    SYMLINK libspdk_ublk.so
00:02:07.472    SYMLINK libspdk_scsi.so
00:02:07.732    CC lib/vhost/vhost.o
00:02:07.732    CC lib/vhost/vhost_rpc.o
00:02:07.732    CC lib/vhost/vhost_scsi.o
00:02:07.732    CC lib/vhost/vhost_blk.o
00:02:07.732    CC lib/vhost/rte_vhost_user.o
00:02:07.732    CC lib/iscsi/conn.o
00:02:07.732    CC lib/iscsi/iscsi.o
00:02:07.732    CC lib/iscsi/init_grp.o
00:02:07.732    CC lib/iscsi/param.o
00:02:07.732    CC lib/iscsi/portal_grp.o
00:02:07.732    CC lib/iscsi/tgt_node.o
00:02:07.732    CC lib/iscsi/iscsi_subsystem.o
00:02:07.732    CC lib/iscsi/iscsi_rpc.o
00:02:07.732    CC lib/iscsi/task.o
00:02:07.991    LIB libspdk_ftl.a
00:02:08.250    SO libspdk_ftl.so.9.0
00:02:08.510    SYMLINK libspdk_ftl.so
00:02:08.768    LIB libspdk_vhost.a
00:02:08.768    SO libspdk_vhost.so.8.0
00:02:09.027    SYMLINK libspdk_vhost.so
00:02:09.285    LIB libspdk_nvmf.a
00:02:09.285    LIB libspdk_iscsi.a
00:02:09.285    SO libspdk_iscsi.so.8.0
00:02:09.285    SO libspdk_nvmf.so.20.0
00:02:09.544    SYMLINK libspdk_iscsi.so
00:02:09.544    SYMLINK libspdk_nvmf.so
00:02:09.803    CC module/vfu_device/vfu_virtio.o
00:02:09.803    CC module/vfu_device/vfu_virtio_blk.o
00:02:09.803    CC module/vfu_device/vfu_virtio_scsi.o
00:02:09.803    CC module/vfu_device/vfu_virtio_rpc.o
00:02:09.803    CC module/vfu_device/vfu_virtio_fs.o
00:02:09.803    CC module/env_dpdk/env_dpdk_rpc.o
00:02:09.803    CC module/keyring/file/keyring.o
00:02:09.803    CC module/keyring/file/keyring_rpc.o
00:02:09.803    CC module/keyring/linux/keyring.o
00:02:09.803    CC module/keyring/linux/keyring_rpc.o
00:02:09.803    CC module/accel/dsa/accel_dsa.o
00:02:09.803    CC module/accel/dsa/accel_dsa_rpc.o
00:02:09.803    CC module/accel/ioat/accel_ioat_rpc.o
00:02:09.803    CC module/accel/ioat/accel_ioat.o
00:02:09.803    CC module/scheduler/dynamic/scheduler_dynamic.o
00:02:09.803    CC module/sock/posix/posix.o
00:02:09.803    CC module/accel/error/accel_error.o
00:02:09.803    CC module/accel/error/accel_error_rpc.o
00:02:09.803    CC module/accel/iaa/accel_iaa.o
00:02:09.803    CC module/accel/iaa/accel_iaa_rpc.o
00:02:09.803    CC module/scheduler/gscheduler/gscheduler.o
00:02:09.803    CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev.o
00:02:09.803    CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev_rpc.o
00:02:09.803    CC module/fsdev/aio/fsdev_aio.o
00:02:09.803    CC module/scheduler/dpdk_governor/dpdk_governor.o
00:02:09.803    CC module/fsdev/aio/fsdev_aio_rpc.o
00:02:09.803    CC module/blob/bdev/blob_bdev.o
00:02:09.803    CC module/fsdev/aio/linux_aio_mgr.o
00:02:10.060    LIB libspdk_env_dpdk_rpc.a
00:02:10.060    SO libspdk_env_dpdk_rpc.so.6.0
00:02:10.060    SYMLINK libspdk_env_dpdk_rpc.so
00:02:10.060    LIB libspdk_keyring_file.a
00:02:10.060    LIB libspdk_keyring_linux.a
00:02:10.060    SO libspdk_keyring_file.so.2.0
00:02:10.060    SO libspdk_keyring_linux.so.1.0
00:02:10.060    LIB libspdk_scheduler_gscheduler.a
00:02:10.060    LIB libspdk_scheduler_dpdk_governor.a
00:02:10.060    LIB libspdk_accel_ioat.a
00:02:10.060    SO libspdk_scheduler_gscheduler.so.4.0
00:02:10.060    SYMLINK libspdk_keyring_file.so
00:02:10.060    SO libspdk_scheduler_dpdk_governor.so.4.0
00:02:10.060    SYMLINK libspdk_keyring_linux.so
00:02:10.060    LIB libspdk_scheduler_dynamic.a
00:02:10.060    SO libspdk_accel_ioat.so.6.0
00:02:10.060    LIB libspdk_accel_error.a
00:02:10.318    LIB libspdk_accel_iaa.a
00:02:10.318    SO libspdk_scheduler_dynamic.so.4.0
00:02:10.318    SO libspdk_accel_error.so.2.0
00:02:10.318    SO libspdk_accel_iaa.so.3.0
00:02:10.318    SYMLINK libspdk_scheduler_gscheduler.so
00:02:10.318    SYMLINK libspdk_scheduler_dpdk_governor.so
00:02:10.318    SYMLINK libspdk_accel_ioat.so
00:02:10.318    SYMLINK libspdk_scheduler_dynamic.so
00:02:10.318    SYMLINK libspdk_accel_error.so
00:02:10.318    LIB libspdk_blob_bdev.a
00:02:10.318    LIB libspdk_accel_dsa.a
00:02:10.318    SYMLINK libspdk_accel_iaa.so
00:02:10.318    SO libspdk_blob_bdev.so.12.0
00:02:10.318    SO libspdk_accel_dsa.so.5.0
00:02:10.318    SYMLINK libspdk_blob_bdev.so
00:02:10.318    SYMLINK libspdk_accel_dsa.so
00:02:10.577    CC module/bdev/malloc/bdev_malloc.o
00:02:10.577    CC module/bdev/null/bdev_null.o
00:02:10.577    CC module/bdev/delay/vbdev_delay.o
00:02:10.577    CC module/bdev/lvol/vbdev_lvol.o
00:02:10.577    CC module/bdev/null/bdev_null_rpc.o
00:02:10.577    CC module/bdev/malloc/bdev_malloc_rpc.o
00:02:10.577    CC module/bdev/delay/vbdev_delay_rpc.o
00:02:10.577    CC module/bdev/error/vbdev_error.o
00:02:10.577    CC module/bdev/lvol/vbdev_lvol_rpc.o
00:02:10.577    CC module/bdev/gpt/gpt.o
00:02:10.577    CC module/bdev/error/vbdev_error_rpc.o
00:02:10.577    CC module/bdev/gpt/vbdev_gpt.o
00:02:10.577    CC module/blobfs/bdev/blobfs_bdev.o
00:02:10.577    CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:02:10.577    CC module/bdev/split/vbdev_split.o
00:02:10.577    CC module/bdev/passthru/vbdev_passthru.o
00:02:10.577    CC module/bdev/split/vbdev_split_rpc.o
00:02:10.577    CC module/bdev/ftl/bdev_ftl.o
00:02:10.577    CC module/bdev/passthru/vbdev_passthru_rpc.o
00:02:10.577    CC module/bdev/ftl/bdev_ftl_rpc.o
00:02:10.577    CC module/bdev/nvme/bdev_nvme.o
00:02:10.577    CC module/bdev/nvme/bdev_nvme_rpc.o
00:02:10.577    CC module/bdev/raid/bdev_raid.o
00:02:10.577    CC module/bdev/zone_block/vbdev_zone_block.o
00:02:10.577    CC module/bdev/raid/bdev_raid_rpc.o
00:02:10.577    CC module/bdev/nvme/nvme_rpc.o
00:02:10.577    CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:02:10.577    CC module/bdev/nvme/bdev_mdns_client.o
00:02:10.577    CC module/bdev/raid/bdev_raid_sb.o
00:02:10.577    CC module/bdev/crypto/vbdev_crypto.o
00:02:10.577    CC module/bdev/nvme/vbdev_opal.o
00:02:10.577    CC module/bdev/crypto/vbdev_crypto_rpc.o
00:02:10.577    CC module/bdev/raid/raid0.o
00:02:10.577    CC module/bdev/nvme/vbdev_opal_rpc.o
00:02:10.577    CC module/bdev/raid/raid1.o
00:02:10.577    CC module/bdev/virtio/bdev_virtio_scsi.o
00:02:10.577    CC module/bdev/aio/bdev_aio.o
00:02:10.577    CC module/bdev/virtio/bdev_virtio_blk.o
00:02:10.577    CC module/bdev/raid/concat.o
00:02:10.577    CC module/bdev/iscsi/bdev_iscsi.o
00:02:10.577    CC module/bdev/aio/bdev_aio_rpc.o
00:02:10.577    CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:02:10.577    CC module/bdev/virtio/bdev_virtio_rpc.o
00:02:10.577    CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:02:10.577    LIB libspdk_vfu_device.a
00:02:10.577    SO libspdk_vfu_device.so.3.0
00:02:10.836    LIB libspdk_fsdev_aio.a
00:02:10.836    SO libspdk_fsdev_aio.so.1.0
00:02:10.836    SYMLINK libspdk_vfu_device.so
00:02:10.836    SYMLINK libspdk_fsdev_aio.so
00:02:10.836    LIB libspdk_blobfs_bdev.a
00:02:10.836    SO libspdk_blobfs_bdev.so.6.0
00:02:10.836    LIB libspdk_sock_posix.a
00:02:10.836    LIB libspdk_bdev_split.a
00:02:10.836    SO libspdk_sock_posix.so.6.0
00:02:10.836    SO libspdk_bdev_split.so.6.0
00:02:11.095    LIB libspdk_bdev_null.a
00:02:11.095    SYMLINK libspdk_blobfs_bdev.so
00:02:11.095    LIB libspdk_bdev_gpt.a
00:02:11.095    SO libspdk_bdev_null.so.6.0
00:02:11.095    LIB libspdk_bdev_error.a
00:02:11.095    SO libspdk_bdev_gpt.so.6.0
00:02:11.095    LIB libspdk_bdev_ftl.a
00:02:11.095    SYMLINK libspdk_bdev_split.so
00:02:11.095    SO libspdk_bdev_error.so.6.0
00:02:11.095    SYMLINK libspdk_sock_posix.so
00:02:11.095    LIB libspdk_bdev_passthru.a
00:02:11.095    SO libspdk_bdev_ftl.so.6.0
00:02:11.095    SYMLINK libspdk_bdev_null.so
00:02:11.095    SO libspdk_bdev_passthru.so.6.0
00:02:11.095    SYMLINK libspdk_bdev_gpt.so
00:02:11.095    LIB libspdk_bdev_aio.a
00:02:11.095    SYMLINK libspdk_bdev_error.so
00:02:11.095    SYMLINK libspdk_bdev_ftl.so
00:02:11.095    SO libspdk_bdev_aio.so.6.0
00:02:11.095    LIB libspdk_bdev_malloc.a
00:02:11.095    SYMLINK libspdk_bdev_passthru.so
00:02:11.095    LIB libspdk_bdev_delay.a
00:02:11.095    SO libspdk_bdev_malloc.so.6.0
00:02:11.095    LIB libspdk_bdev_crypto.a
00:02:11.095    SO libspdk_bdev_delay.so.6.0
00:02:11.095    SYMLINK libspdk_bdev_aio.so
00:02:11.095    LIB libspdk_bdev_zone_block.a
00:02:11.095    SO libspdk_bdev_crypto.so.6.0
00:02:11.095    SO libspdk_bdev_zone_block.so.6.0
00:02:11.095    SYMLINK libspdk_bdev_malloc.so
00:02:11.095    LIB libspdk_bdev_iscsi.a
00:02:11.353    SYMLINK libspdk_bdev_delay.so
00:02:11.353    SO libspdk_bdev_iscsi.so.6.0
00:02:11.353    SYMLINK libspdk_bdev_zone_block.so
00:02:11.353    SYMLINK libspdk_bdev_crypto.so
00:02:11.353    LIB libspdk_bdev_lvol.a
00:02:11.353    SO libspdk_bdev_lvol.so.6.0
00:02:11.353    SYMLINK libspdk_bdev_iscsi.so
00:02:11.353    SYMLINK libspdk_bdev_lvol.so
00:02:11.353    LIB libspdk_bdev_virtio.a
00:02:11.353    SO libspdk_bdev_virtio.so.6.0
00:02:11.353    SYMLINK libspdk_bdev_virtio.so
00:02:11.612    LIB libspdk_accel_dpdk_cryptodev.a
00:02:11.612    SO libspdk_accel_dpdk_cryptodev.so.3.0
00:02:11.612    SYMLINK libspdk_accel_dpdk_cryptodev.so
00:02:11.870    LIB libspdk_bdev_raid.a
00:02:11.870    SO libspdk_bdev_raid.so.6.0
00:02:11.870    SYMLINK libspdk_bdev_raid.so
00:02:13.774    LIB libspdk_bdev_nvme.a
00:02:13.774    SO libspdk_bdev_nvme.so.7.1
00:02:13.774    SYMLINK libspdk_bdev_nvme.so
00:02:13.774    CC module/event/subsystems/iobuf/iobuf.o
00:02:13.774    CC module/event/subsystems/iobuf/iobuf_rpc.o
00:02:13.774    CC module/event/subsystems/scheduler/scheduler.o
00:02:13.774    CC module/event/subsystems/vmd/vmd.o
00:02:13.774    CC module/event/subsystems/fsdev/fsdev.o
00:02:13.774    CC module/event/subsystems/vmd/vmd_rpc.o
00:02:13.774    CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:02:13.774    CC module/event/subsystems/sock/sock.o
00:02:13.774    CC module/event/subsystems/keyring/keyring.o
00:02:13.774    CC module/event/subsystems/vhost_blk/vhost_blk.o
00:02:14.033    LIB libspdk_event_keyring.a
00:02:14.033    LIB libspdk_event_fsdev.a
00:02:14.033    LIB libspdk_event_scheduler.a
00:02:14.033    LIB libspdk_event_vhost_blk.a
00:02:14.033    LIB libspdk_event_sock.a
00:02:14.033    LIB libspdk_event_vfu_tgt.a
00:02:14.033    LIB libspdk_event_vmd.a
00:02:14.033    SO libspdk_event_fsdev.so.1.0
00:02:14.033    SO libspdk_event_keyring.so.1.0
00:02:14.033    SO libspdk_event_scheduler.so.4.0
00:02:14.033    SO libspdk_event_sock.so.5.0
00:02:14.033    SO libspdk_event_vhost_blk.so.3.0
00:02:14.033    SO libspdk_event_vfu_tgt.so.3.0
00:02:14.033    SO libspdk_event_vmd.so.6.0
00:02:14.033    LIB libspdk_event_iobuf.a
00:02:14.033    SYMLINK libspdk_event_fsdev.so
00:02:14.033    SYMLINK libspdk_event_keyring.so
00:02:14.033    SYMLINK libspdk_event_scheduler.so
00:02:14.033    SYMLINK libspdk_event_sock.so
00:02:14.033    SYMLINK libspdk_event_vfu_tgt.so
00:02:14.033    SYMLINK libspdk_event_vhost_blk.so
00:02:14.033    SO libspdk_event_iobuf.so.3.0
00:02:14.033    SYMLINK libspdk_event_vmd.so
00:02:14.033    SYMLINK libspdk_event_iobuf.so
00:02:14.291    CC module/event/subsystems/accel/accel.o
00:02:14.291    LIB libspdk_event_accel.a
00:02:14.291    SO libspdk_event_accel.so.6.0
00:02:14.549    SYMLINK libspdk_event_accel.so
00:02:14.549    CC module/event/subsystems/bdev/bdev.o
00:02:14.805    LIB libspdk_event_bdev.a
00:02:14.805    SO libspdk_event_bdev.so.6.0
00:02:14.805    SYMLINK libspdk_event_bdev.so
00:02:14.805    CC module/event/subsystems/ublk/ublk.o
00:02:14.805    CC module/event/subsystems/nvmf/nvmf_rpc.o
00:02:14.805    CC module/event/subsystems/nvmf/nvmf_tgt.o
00:02:14.805    CC module/event/subsystems/scsi/scsi.o
00:02:14.805    CC module/event/subsystems/nbd/nbd.o
00:02:15.063    LIB libspdk_event_ublk.a
00:02:15.063    LIB libspdk_event_nbd.a
00:02:15.063    SO libspdk_event_ublk.so.3.0
00:02:15.063    LIB libspdk_event_scsi.a
00:02:15.063    SO libspdk_event_nbd.so.6.0
00:02:15.063    SO libspdk_event_scsi.so.6.0
00:02:15.063    SYMLINK libspdk_event_ublk.so
00:02:15.063    SYMLINK libspdk_event_nbd.so
00:02:15.063    SYMLINK libspdk_event_scsi.so
00:02:15.063    LIB libspdk_event_nvmf.a
00:02:15.063    SO libspdk_event_nvmf.so.6.0
00:02:15.321    SYMLINK libspdk_event_nvmf.so
00:02:15.321    CC module/event/subsystems/iscsi/iscsi.o
00:02:15.321    CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:02:15.321    LIB libspdk_event_iscsi.a
00:02:15.321    LIB libspdk_event_vhost_scsi.a
00:02:15.321    SO libspdk_event_iscsi.so.6.0
00:02:15.321    SO libspdk_event_vhost_scsi.so.3.0
00:02:15.579    SYMLINK libspdk_event_iscsi.so
00:02:15.579    SYMLINK libspdk_event_vhost_scsi.so
00:02:15.579    SO libspdk.so.6.0
00:02:15.579    SYMLINK libspdk.so
00:02:15.847    CXX app/trace/trace.o
00:02:15.847    CC app/trace_record/trace_record.o
00:02:15.847    CC app/spdk_nvme_perf/perf.o
00:02:15.847    CC app/spdk_lspci/spdk_lspci.o
00:02:15.847    CC test/rpc_client/rpc_client_test.o
00:02:15.847    CC app/spdk_top/spdk_top.o
00:02:15.847    CC app/spdk_nvme_identify/identify.o
00:02:15.847    CC app/spdk_nvme_discover/discovery_aer.o
00:02:15.847    TEST_HEADER include/spdk/accel.h
00:02:15.847    TEST_HEADER include/spdk/accel_module.h
00:02:15.847    TEST_HEADER include/spdk/assert.h
00:02:15.847    TEST_HEADER include/spdk/barrier.h
00:02:15.847    TEST_HEADER include/spdk/base64.h
00:02:15.847    TEST_HEADER include/spdk/bdev_module.h
00:02:15.847    TEST_HEADER include/spdk/bdev.h
00:02:15.847    TEST_HEADER include/spdk/bdev_zone.h
00:02:15.847    TEST_HEADER include/spdk/bit_array.h
00:02:15.847    TEST_HEADER include/spdk/bit_pool.h
00:02:15.847    TEST_HEADER include/spdk/blob_bdev.h
00:02:15.847    TEST_HEADER include/spdk/blobfs_bdev.h
00:02:15.847    TEST_HEADER include/spdk/blobfs.h
00:02:15.847    TEST_HEADER include/spdk/conf.h
00:02:15.847    TEST_HEADER include/spdk/blob.h
00:02:15.847    TEST_HEADER include/spdk/config.h
00:02:15.847    TEST_HEADER include/spdk/cpuset.h
00:02:15.847    TEST_HEADER include/spdk/crc16.h
00:02:15.847    TEST_HEADER include/spdk/crc32.h
00:02:15.847    TEST_HEADER include/spdk/crc64.h
00:02:15.847    TEST_HEADER include/spdk/dif.h
00:02:15.847    TEST_HEADER include/spdk/dma.h
00:02:15.847    TEST_HEADER include/spdk/endian.h
00:02:15.847    TEST_HEADER include/spdk/env_dpdk.h
00:02:15.847    TEST_HEADER include/spdk/env.h
00:02:15.847    CC examples/interrupt_tgt/interrupt_tgt.o
00:02:15.847    TEST_HEADER include/spdk/fd_group.h
00:02:15.847    TEST_HEADER include/spdk/event.h
00:02:15.847    TEST_HEADER include/spdk/fd.h
00:02:15.847    TEST_HEADER include/spdk/file.h
00:02:15.847    TEST_HEADER include/spdk/fsdev.h
00:02:15.847    TEST_HEADER include/spdk/fsdev_module.h
00:02:15.847    TEST_HEADER include/spdk/ftl.h
00:02:15.847    TEST_HEADER include/spdk/gpt_spec.h
00:02:15.847    TEST_HEADER include/spdk/hexlify.h
00:02:15.847    TEST_HEADER include/spdk/histogram_data.h
00:02:15.847    TEST_HEADER include/spdk/idxd.h
00:02:15.847    TEST_HEADER include/spdk/idxd_spec.h
00:02:15.847    TEST_HEADER include/spdk/init.h
00:02:15.847    TEST_HEADER include/spdk/ioat_spec.h
00:02:15.847    TEST_HEADER include/spdk/ioat.h
00:02:15.847    TEST_HEADER include/spdk/iscsi_spec.h
00:02:15.847    TEST_HEADER include/spdk/json.h
00:02:15.847    TEST_HEADER include/spdk/jsonrpc.h
00:02:15.847    TEST_HEADER include/spdk/keyring.h
00:02:15.847    TEST_HEADER include/spdk/keyring_module.h
00:02:15.847    TEST_HEADER include/spdk/likely.h
00:02:15.847    TEST_HEADER include/spdk/log.h
00:02:15.847    TEST_HEADER include/spdk/lvol.h
00:02:15.847    TEST_HEADER include/spdk/md5.h
00:02:15.847    TEST_HEADER include/spdk/memory.h
00:02:15.847    TEST_HEADER include/spdk/mmio.h
00:02:15.847    TEST_HEADER include/spdk/nbd.h
00:02:15.847    TEST_HEADER include/spdk/net.h
00:02:15.847    TEST_HEADER include/spdk/notify.h
00:02:15.847    TEST_HEADER include/spdk/nvme.h
00:02:15.847    TEST_HEADER include/spdk/nvme_intel.h
00:02:15.847    TEST_HEADER include/spdk/nvme_ocssd.h
00:02:15.847    TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:02:15.847    TEST_HEADER include/spdk/nvme_spec.h
00:02:15.847    TEST_HEADER include/spdk/nvme_zns.h
00:02:15.847    TEST_HEADER include/spdk/nvmf_cmd.h
00:02:15.847    TEST_HEADER include/spdk/nvmf_fc_spec.h
00:02:15.847    CC app/iscsi_tgt/iscsi_tgt.o
00:02:15.847    TEST_HEADER include/spdk/nvmf.h
00:02:15.847    TEST_HEADER include/spdk/nvmf_spec.h
00:02:15.847    TEST_HEADER include/spdk/nvmf_transport.h
00:02:15.847    TEST_HEADER include/spdk/opal.h
00:02:15.847    TEST_HEADER include/spdk/pci_ids.h
00:02:15.847    CC app/nvmf_tgt/nvmf_main.o
00:02:15.847    TEST_HEADER include/spdk/pipe.h
00:02:15.847    TEST_HEADER include/spdk/opal_spec.h
00:02:15.847    TEST_HEADER include/spdk/queue.h
00:02:15.847    TEST_HEADER include/spdk/reduce.h
00:02:15.847    TEST_HEADER include/spdk/rpc.h
00:02:15.847    TEST_HEADER include/spdk/scheduler.h
00:02:15.847    TEST_HEADER include/spdk/scsi.h
00:02:15.847    TEST_HEADER include/spdk/scsi_spec.h
00:02:15.847    TEST_HEADER include/spdk/sock.h
00:02:15.847    CC app/spdk_dd/spdk_dd.o
00:02:15.847    TEST_HEADER include/spdk/stdinc.h
00:02:15.847    TEST_HEADER include/spdk/string.h
00:02:15.847    TEST_HEADER include/spdk/thread.h
00:02:15.847    TEST_HEADER include/spdk/trace.h
00:02:15.847    TEST_HEADER include/spdk/trace_parser.h
00:02:15.847    TEST_HEADER include/spdk/tree.h
00:02:15.847    TEST_HEADER include/spdk/ublk.h
00:02:15.847    TEST_HEADER include/spdk/util.h
00:02:15.847    TEST_HEADER include/spdk/uuid.h
00:02:15.847    TEST_HEADER include/spdk/version.h
00:02:15.847    TEST_HEADER include/spdk/vfio_user_pci.h
00:02:15.847    TEST_HEADER include/spdk/vfio_user_spec.h
00:02:15.847    TEST_HEADER include/spdk/vhost.h
00:02:15.847    TEST_HEADER include/spdk/vmd.h
00:02:15.847    TEST_HEADER include/spdk/xor.h
00:02:15.847    TEST_HEADER include/spdk/zipf.h
00:02:15.847    CXX test/cpp_headers/accel.o
00:02:15.847    CXX test/cpp_headers/accel_module.o
00:02:15.847    CXX test/cpp_headers/assert.o
00:02:15.847    CXX test/cpp_headers/barrier.o
00:02:15.847    CXX test/cpp_headers/base64.o
00:02:15.847    CXX test/cpp_headers/bdev.o
00:02:15.847    CXX test/cpp_headers/bdev_module.o
00:02:15.847    CXX test/cpp_headers/bdev_zone.o
00:02:15.847    CXX test/cpp_headers/bit_array.o
00:02:15.847    CXX test/cpp_headers/bit_pool.o
00:02:15.847    CXX test/cpp_headers/blob_bdev.o
00:02:15.847    CXX test/cpp_headers/blobfs_bdev.o
00:02:15.847    CXX test/cpp_headers/blobfs.o
00:02:15.847    CXX test/cpp_headers/conf.o
00:02:15.847    CXX test/cpp_headers/config.o
00:02:15.847    CXX test/cpp_headers/blob.o
00:02:15.847    CXX test/cpp_headers/cpuset.o
00:02:15.847    CXX test/cpp_headers/crc16.o
00:02:15.847    CXX test/cpp_headers/crc32.o
00:02:15.847    CXX test/cpp_headers/crc64.o
00:02:15.847    CXX test/cpp_headers/dif.o
00:02:15.847    CXX test/cpp_headers/dma.o
00:02:15.847    CXX test/cpp_headers/env_dpdk.o
00:02:15.847    CXX test/cpp_headers/endian.o
00:02:15.847    CXX test/cpp_headers/env.o
00:02:15.847    CXX test/cpp_headers/event.o
00:02:15.847    CXX test/cpp_headers/fd_group.o
00:02:15.847    CC app/spdk_tgt/spdk_tgt.o
00:02:15.847    CXX test/cpp_headers/fd.o
00:02:15.847    CXX test/cpp_headers/file.o
00:02:15.847    CXX test/cpp_headers/fsdev.o
00:02:15.847    CXX test/cpp_headers/fsdev_module.o
00:02:15.847    CXX test/cpp_headers/gpt_spec.o
00:02:15.847    CXX test/cpp_headers/ftl.o
00:02:15.847    CXX test/cpp_headers/hexlify.o
00:02:15.847    CXX test/cpp_headers/histogram_data.o
00:02:15.847    CXX test/cpp_headers/idxd.o
00:02:15.847    CXX test/cpp_headers/idxd_spec.o
00:02:15.847    CXX test/cpp_headers/init.o
00:02:15.847    CXX test/cpp_headers/ioat.o
00:02:15.847    CXX test/cpp_headers/ioat_spec.o
00:02:15.847    CXX test/cpp_headers/iscsi_spec.o
00:02:15.847    CXX test/cpp_headers/json.o
00:02:15.847    CXX test/cpp_headers/keyring_module.o
00:02:15.847    CXX test/cpp_headers/jsonrpc.o
00:02:15.847    CXX test/cpp_headers/keyring.o
00:02:15.847    CXX test/cpp_headers/likely.o
00:02:15.847    CXX test/cpp_headers/log.o
00:02:15.847    CXX test/cpp_headers/lvol.o
00:02:15.847    CXX test/cpp_headers/md5.o
00:02:15.847    CXX test/cpp_headers/memory.o
00:02:15.847    CXX test/cpp_headers/mmio.o
00:02:15.847    CC examples/util/zipf/zipf.o
00:02:15.847    CXX test/cpp_headers/net.o
00:02:15.847    CXX test/cpp_headers/nbd.o
00:02:15.847    CXX test/cpp_headers/nvme.o
00:02:15.847    CXX test/cpp_headers/notify.o
00:02:15.847    CC examples/ioat/perf/perf.o
00:02:15.847    CXX test/cpp_headers/nvme_intel.o
00:02:15.847    CXX test/cpp_headers/nvme_ocssd.o
00:02:15.847    CC examples/ioat/verify/verify.o
00:02:15.847    CXX test/cpp_headers/nvme_ocssd_spec.o
00:02:15.847    CXX test/cpp_headers/nvme_spec.o
00:02:15.847    CC test/thread/poller_perf/poller_perf.o
00:02:15.847    CC test/env/pci/pci_ut.o
00:02:15.847    CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:02:15.847    CC test/env/vtophys/vtophys.o
00:02:15.847    CC test/app/jsoncat/jsoncat.o
00:02:16.119    CC test/app/histogram_perf/histogram_perf.o
00:02:16.119    CC test/app/stub/stub.o
00:02:16.119    CC test/env/memory/memory_ut.o
00:02:16.119    CC app/fio/nvme/fio_plugin.o
00:02:16.119    CC test/dma/test_dma/test_dma.o
00:02:16.119    CC app/fio/bdev/fio_plugin.o
00:02:16.119    CC test/app/bdev_svc/bdev_svc.o
00:02:16.119    LINK spdk_lspci
00:02:16.388    LINK spdk_nvme_discover
00:02:16.388    CC test/env/mem_callbacks/mem_callbacks.o
00:02:16.388    CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:02:16.388    LINK interrupt_tgt
00:02:16.388    LINK rpc_client_test
00:02:16.388    LINK nvmf_tgt
00:02:16.388    CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:02:16.388    LINK iscsi_tgt
00:02:16.651    LINK spdk_trace_record
00:02:16.651    LINK env_dpdk_post_init
00:02:16.651    LINK vtophys
00:02:16.651    LINK poller_perf
00:02:16.651    LINK zipf
00:02:16.651    LINK jsoncat
00:02:16.651    CXX test/cpp_headers/nvme_zns.o
00:02:16.651    CXX test/cpp_headers/nvmf_cmd.o
00:02:16.651    LINK ioat_perf
00:02:16.651    CXX test/cpp_headers/nvmf_fc_spec.o
00:02:16.651    CXX test/cpp_headers/nvmf.o
00:02:16.651    CXX test/cpp_headers/nvmf_spec.o
00:02:16.651    CXX test/cpp_headers/nvmf_transport.o
00:02:16.651    LINK histogram_perf
00:02:16.651    CXX test/cpp_headers/opal.o
00:02:16.651    CXX test/cpp_headers/opal_spec.o
00:02:16.651    CXX test/cpp_headers/pci_ids.o
00:02:16.651    CXX test/cpp_headers/pipe.o
00:02:16.651    CXX test/cpp_headers/queue.o
00:02:16.651    CXX test/cpp_headers/reduce.o
00:02:16.651    CXX test/cpp_headers/rpc.o
00:02:16.651    CXX test/cpp_headers/scheduler.o
00:02:16.651    CXX test/cpp_headers/scsi.o
00:02:16.651    CXX test/cpp_headers/scsi_spec.o
00:02:16.651    CXX test/cpp_headers/sock.o
00:02:16.651    CXX test/cpp_headers/stdinc.o
00:02:16.651    CXX test/cpp_headers/string.o
00:02:16.651    CXX test/cpp_headers/thread.o
00:02:16.651    CXX test/cpp_headers/trace.o
00:02:16.651    LINK spdk_tgt
00:02:16.651    LINK verify
00:02:16.651    CXX test/cpp_headers/trace_parser.o
00:02:16.651    CXX test/cpp_headers/tree.o
00:02:16.651    LINK bdev_svc
00:02:16.651    CXX test/cpp_headers/ublk.o
00:02:16.651    CXX test/cpp_headers/util.o
00:02:16.651    CXX test/cpp_headers/uuid.o
00:02:16.651    CXX test/cpp_headers/version.o
00:02:16.651    CXX test/cpp_headers/vfio_user_pci.o
00:02:16.651    CXX test/cpp_headers/vfio_user_spec.o
00:02:16.651    CXX test/cpp_headers/vhost.o
00:02:16.651    CXX test/cpp_headers/xor.o
00:02:16.651    CXX test/cpp_headers/vmd.o
00:02:16.651    LINK stub
00:02:16.651    CXX test/cpp_headers/zipf.o
00:02:16.651    CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:02:16.910    CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:02:16.910    LINK spdk_trace
00:02:16.910    LINK spdk_dd
00:02:17.168    LINK pci_ut
00:02:17.168    LINK spdk_bdev
00:02:17.168    CC examples/sock/hello_world/hello_sock.o
00:02:17.168    CC examples/vmd/lsvmd/lsvmd.o
00:02:17.168    CC examples/vmd/led/led.o
00:02:17.168    CC examples/idxd/perf/perf.o
00:02:17.168    CC test/event/reactor/reactor.o
00:02:17.168    CC test/event/event_perf/event_perf.o
00:02:17.168    CC test/event/reactor_perf/reactor_perf.o
00:02:17.168    CC test/event/app_repeat/app_repeat.o
00:02:17.168    CC examples/thread/thread/thread_ex.o
00:02:17.168    CC test/event/scheduler/scheduler.o
00:02:17.168    LINK test_dma
00:02:17.168    LINK mem_callbacks
00:02:17.168    LINK nvme_fuzz
00:02:17.426    LINK reactor
00:02:17.426    CC app/vhost/vhost.o
00:02:17.426    LINK event_perf
00:02:17.426    LINK reactor_perf
00:02:17.426    LINK lsvmd
00:02:17.426    LINK app_repeat
00:02:17.426    LINK led
00:02:17.426    LINK spdk_nvme_identify
00:02:17.426    LINK spdk_nvme
00:02:17.426    LINK hello_sock
00:02:17.426    LINK scheduler
00:02:17.426    LINK thread
00:02:17.426    LINK spdk_nvme_perf
00:02:17.426    LINK vhost
00:02:17.426    LINK spdk_top
00:02:17.426    LINK vhost_fuzz
00:02:17.426    LINK idxd_perf
00:02:17.683    CC test/nvme/err_injection/err_injection.o
00:02:17.683    CC test/nvme/overhead/overhead.o
00:02:17.684    CC test/nvme/startup/startup.o
00:02:17.684    CC test/nvme/sgl/sgl.o
00:02:17.684    CC test/nvme/e2edp/nvme_dp.o
00:02:17.684    CC test/nvme/fused_ordering/fused_ordering.o
00:02:17.684    CC test/nvme/simple_copy/simple_copy.o
00:02:17.684    CC test/nvme/reset/reset.o
00:02:17.684    CC test/nvme/reserve/reserve.o
00:02:17.684    CC test/nvme/doorbell_aers/doorbell_aers.o
00:02:17.684    CC test/nvme/aer/aer.o
00:02:17.684    CC test/nvme/cuse/cuse.o
00:02:17.684    CC test/nvme/fdp/fdp.o
00:02:17.684    CC test/nvme/boot_partition/boot_partition.o
00:02:17.684    CC test/nvme/connect_stress/connect_stress.o
00:02:17.684    CC test/nvme/compliance/nvme_compliance.o
00:02:17.684    CC test/blobfs/mkfs/mkfs.o
00:02:17.684    CC test/accel/dif/dif.o
00:02:17.684    CC test/lvol/esnap/esnap.o
00:02:17.684    CC examples/nvme/reconnect/reconnect.o
00:02:17.684    CC examples/nvme/hotplug/hotplug.o
00:02:17.684    CC examples/nvme/arbitration/arbitration.o
00:02:17.684    CC examples/nvme/nvme_manage/nvme_manage.o
00:02:17.684    CC examples/nvme/abort/abort.o
00:02:17.684    CC examples/nvme/hello_world/hello_world.o
00:02:17.684    CC examples/nvme/cmb_copy/cmb_copy.o
00:02:17.684    CC examples/nvme/pmr_persistence/pmr_persistence.o
00:02:17.684    LINK boot_partition
00:02:17.684    LINK startup
00:02:17.942    LINK err_injection
00:02:17.942    LINK connect_stress
00:02:17.942    LINK doorbell_aers
00:02:17.942    LINK fused_ordering
00:02:17.942    LINK reserve
00:02:17.942    CC examples/accel/perf/accel_perf.o
00:02:17.942    LINK simple_copy
00:02:17.942    CC examples/fsdev/hello_world/hello_fsdev.o
00:02:17.942    CC examples/blob/cli/blobcli.o
00:02:17.942    LINK mkfs
00:02:17.942    CC examples/blob/hello_world/hello_blob.o
00:02:17.942    LINK reset
00:02:17.942    LINK sgl
00:02:17.942    LINK nvme_dp
00:02:17.942    LINK memory_ut
00:02:17.942    LINK overhead
00:02:17.942    LINK aer
00:02:17.942    LINK pmr_persistence
00:02:17.942    LINK cmb_copy
00:02:17.942    LINK fdp
00:02:17.942    LINK nvme_compliance
00:02:17.942    LINK hello_world
00:02:17.942    LINK hotplug
00:02:18.200    LINK arbitration
00:02:18.200    LINK hello_blob
00:02:18.200    LINK reconnect
00:02:18.200    LINK hello_fsdev
00:02:18.200    LINK abort
00:02:18.459    LINK nvme_manage
00:02:18.459    LINK accel_perf
00:02:18.459    LINK blobcli
00:02:18.459    LINK dif
00:02:18.718    CC examples/bdev/hello_world/hello_bdev.o
00:02:18.718    CC examples/bdev/bdevperf/bdevperf.o
00:02:18.718    LINK iscsi_fuzz
00:02:18.976    CC test/bdev/bdevio/bdevio.o
00:02:18.976    LINK hello_bdev
00:02:18.976    LINK cuse
00:02:19.234    LINK bdevio
00:02:19.801    LINK bdevperf
00:02:20.060    CC examples/nvmf/nvmf/nvmf.o
00:02:20.318    LINK nvmf
00:02:23.605    LINK esnap
00:02:23.605  
00:02:23.605  real	1m11.975s
00:02:23.605  user	18m50.117s
00:02:23.605  sys	4m15.040s
00:02:23.605   22:29:24 make -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:23.605   22:29:24 make -- common/autotest_common.sh@10 -- $ set +x
00:02:23.605  ************************************
00:02:23.605  END TEST make
00:02:23.605  ************************************
00:02:23.605   22:29:24  -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:02:23.605   22:29:24  -- pm/common@29 -- $ signal_monitor_resources TERM
00:02:23.605   22:29:24  -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:02:23.605   22:29:24  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:23.605   22:29:24  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:02:23.605   22:29:24  -- pm/common@44 -- $ pid=4135031
00:02:23.605   22:29:24  -- pm/common@50 -- $ kill -TERM 4135031
00:02:23.605   22:29:24  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:23.605   22:29:24  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:02:23.605   22:29:24  -- pm/common@44 -- $ pid=4135033
00:02:23.605   22:29:24  -- pm/common@50 -- $ kill -TERM 4135033
00:02:23.605   22:29:24  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:23.605   22:29:24  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:02:23.605   22:29:24  -- pm/common@44 -- $ pid=4135034
00:02:23.605   22:29:24  -- pm/common@50 -- $ kill -TERM 4135034
00:02:23.605   22:29:24  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:23.605   22:29:24  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:02:23.605   22:29:24  -- pm/common@44 -- $ pid=4135064
00:02:23.605   22:29:24  -- pm/common@50 -- $ sudo -E kill -TERM 4135064
00:02:23.605   22:29:24  -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:02:23.605   22:29:24  -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/vfio-user-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf
00:02:23.605    22:29:24  -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:02:23.605     22:29:24  -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:02:23.605     22:29:24  -- common/autotest_common.sh@1711 -- # lcov --version
00:02:23.605    22:29:24  -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:02:23.605    22:29:24  -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:02:23.605    22:29:24  -- scripts/common.sh@333 -- # local ver1 ver1_l
00:02:23.605    22:29:24  -- scripts/common.sh@334 -- # local ver2 ver2_l
00:02:23.605    22:29:24  -- scripts/common.sh@336 -- # IFS=.-:
00:02:23.605    22:29:24  -- scripts/common.sh@336 -- # read -ra ver1
00:02:23.605    22:29:24  -- scripts/common.sh@337 -- # IFS=.-:
00:02:23.605    22:29:24  -- scripts/common.sh@337 -- # read -ra ver2
00:02:23.605    22:29:24  -- scripts/common.sh@338 -- # local 'op=<'
00:02:23.605    22:29:24  -- scripts/common.sh@340 -- # ver1_l=2
00:02:23.605    22:29:24  -- scripts/common.sh@341 -- # ver2_l=1
00:02:23.605    22:29:24  -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:02:23.605    22:29:24  -- scripts/common.sh@344 -- # case "$op" in
00:02:23.605    22:29:24  -- scripts/common.sh@345 -- # : 1
00:02:23.605    22:29:24  -- scripts/common.sh@364 -- # (( v = 0 ))
00:02:23.605    22:29:24  -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:23.605     22:29:24  -- scripts/common.sh@365 -- # decimal 1
00:02:23.605     22:29:24  -- scripts/common.sh@353 -- # local d=1
00:02:23.605     22:29:24  -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:02:23.605     22:29:24  -- scripts/common.sh@355 -- # echo 1
00:02:23.605    22:29:24  -- scripts/common.sh@365 -- # ver1[v]=1
00:02:23.605     22:29:24  -- scripts/common.sh@366 -- # decimal 2
00:02:23.605     22:29:24  -- scripts/common.sh@353 -- # local d=2
00:02:23.605     22:29:24  -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:02:23.605     22:29:24  -- scripts/common.sh@355 -- # echo 2
00:02:23.605    22:29:24  -- scripts/common.sh@366 -- # ver2[v]=2
00:02:23.605    22:29:24  -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:02:23.606    22:29:24  -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:02:23.606    22:29:24  -- scripts/common.sh@368 -- # return 0
00:02:23.606    22:29:24  -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:02:23.606    22:29:24  -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:02:23.606  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:23.606  		--rc genhtml_branch_coverage=1
00:02:23.606  		--rc genhtml_function_coverage=1
00:02:23.606  		--rc genhtml_legend=1
00:02:23.606  		--rc geninfo_all_blocks=1
00:02:23.606  		--rc geninfo_unexecuted_blocks=1
00:02:23.606  		
00:02:23.606  		'
00:02:23.606    22:29:24  -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:02:23.606  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:23.606  		--rc genhtml_branch_coverage=1
00:02:23.606  		--rc genhtml_function_coverage=1
00:02:23.606  		--rc genhtml_legend=1
00:02:23.606  		--rc geninfo_all_blocks=1
00:02:23.606  		--rc geninfo_unexecuted_blocks=1
00:02:23.606  		
00:02:23.606  		'
00:02:23.606    22:29:24  -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:02:23.606  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:23.606  		--rc genhtml_branch_coverage=1
00:02:23.606  		--rc genhtml_function_coverage=1
00:02:23.606  		--rc genhtml_legend=1
00:02:23.606  		--rc geninfo_all_blocks=1
00:02:23.606  		--rc geninfo_unexecuted_blocks=1
00:02:23.606  		
00:02:23.606  		'
00:02:23.606    22:29:24  -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:02:23.606  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:23.606  		--rc genhtml_branch_coverage=1
00:02:23.606  		--rc genhtml_function_coverage=1
00:02:23.606  		--rc genhtml_legend=1
00:02:23.606  		--rc geninfo_all_blocks=1
00:02:23.606  		--rc geninfo_unexecuted_blocks=1
00:02:23.606  		
00:02:23.606  		'
00:02:23.606   22:29:24  -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh
00:02:23.606     22:29:24  -- nvmf/common.sh@7 -- # uname -s
00:02:23.606    22:29:24  -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:02:23.606    22:29:24  -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:02:23.606    22:29:24  -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:02:23.606    22:29:24  -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:02:23.606    22:29:24  -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:02:23.606    22:29:24  -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:02:23.606    22:29:24  -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:02:23.606    22:29:24  -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:02:23.606    22:29:24  -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:02:23.606     22:29:24  -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:02:23.606    22:29:24  -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:808ec059-55a7-e511-906e-0012795d96dd
00:02:23.606    22:29:24  -- nvmf/common.sh@18 -- # NVME_HOSTID=808ec059-55a7-e511-906e-0012795d96dd
00:02:23.606    22:29:24  -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:02:23.606    22:29:24  -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:02:23.606    22:29:24  -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:02:23.606    22:29:24  -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:02:23.606    22:29:24  -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/common.sh
00:02:23.606     22:29:24  -- scripts/common.sh@15 -- # shopt -s extglob
00:02:23.606     22:29:24  -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:02:23.606     22:29:24  -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:23.606     22:29:24  -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:23.606      22:29:24  -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:23.606      22:29:24  -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:23.606      22:29:24  -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:23.606      22:29:24  -- paths/export.sh@5 -- # export PATH
00:02:23.606      22:29:24  -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:23.606    22:29:24  -- nvmf/common.sh@51 -- # : 0
00:02:23.606    22:29:24  -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:02:23.606    22:29:24  -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:02:23.606    22:29:24  -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:02:23.606    22:29:24  -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:02:23.606    22:29:24  -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:02:23.606    22:29:24  -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:02:23.606  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:02:23.606    22:29:24  -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:02:23.606    22:29:24  -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:02:23.606    22:29:24  -- nvmf/common.sh@55 -- # have_pci_nics=0
00:02:23.606   22:29:24  -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:02:23.606    22:29:24  -- spdk/autotest.sh@32 -- # uname -s
00:02:23.865   22:29:24  -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:02:23.865   22:29:24  -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:02:23.865   22:29:24  -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/coredumps
00:02:23.865   22:29:24  -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:02:23.865   22:29:24  -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/coredumps
00:02:23.865   22:29:24  -- spdk/autotest.sh@44 -- # modprobe nbd
00:02:23.865    22:29:24  -- spdk/autotest.sh@46 -- # type -P udevadm
00:02:23.865   22:29:24  -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:02:23.865   22:29:24  -- spdk/autotest.sh@48 -- # udevadm_pid=8990
00:02:23.865   22:29:24  -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:02:23.865   22:29:24  -- spdk/autotest.sh@53 -- # start_monitor_resources
00:02:23.865   22:29:24  -- pm/common@17 -- # local monitor
00:02:23.865   22:29:24  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:23.865   22:29:24  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:23.865    22:29:24  -- pm/common@21 -- # date +%s
00:02:23.865   22:29:24  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:23.865   22:29:24  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:23.865    22:29:24  -- pm/common@21 -- # date +%s
00:02:23.865   22:29:24  -- pm/common@25 -- # sleep 1
00:02:23.865    22:29:24  -- pm/common@21 -- # date +%s
00:02:23.865   22:29:24  -- pm/common@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733866164
00:02:23.865    22:29:24  -- pm/common@21 -- # date +%s
00:02:23.865   22:29:24  -- pm/common@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733866164
00:02:23.865   22:29:24  -- pm/common@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733866164
00:02:23.865   22:29:24  -- pm/common@21 -- # sudo -E /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733866164
00:02:23.865  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733866164_collect-cpu-load.pm.log
00:02:23.865  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733866164_collect-vmstat.pm.log
00:02:23.865  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733866164_collect-cpu-temp.pm.log
00:02:23.865  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733866164_collect-bmc-pm.bmc.pm.log
00:02:24.805   22:29:25  -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:02:24.805   22:29:25  -- spdk/autotest.sh@57 -- # timing_enter autotest
00:02:24.805   22:29:25  -- common/autotest_common.sh@726 -- # xtrace_disable
00:02:24.805   22:29:25  -- common/autotest_common.sh@10 -- # set +x
00:02:24.805   22:29:25  -- spdk/autotest.sh@59 -- # create_test_list
00:02:24.805   22:29:25  -- common/autotest_common.sh@752 -- # xtrace_disable
00:02:24.806   22:29:25  -- common/autotest_common.sh@10 -- # set +x
00:02:24.806     22:29:25  -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/autotest.sh
00:02:24.806    22:29:25  -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:02:24.806   22:29:25  -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:02:24.806   22:29:25  -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output
00:02:24.806   22:29:25  -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:02:24.806   22:29:25  -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:02:24.806    22:29:25  -- common/autotest_common.sh@1457 -- # uname
00:02:24.806   22:29:25  -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:02:24.806   22:29:25  -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:02:24.806    22:29:25  -- common/autotest_common.sh@1477 -- # uname
00:02:24.806   22:29:25  -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:02:24.806   22:29:25  -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:02:24.806   22:29:25  -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:02:24.806  lcov: LCOV version 1.15
00:02:24.806   22:29:25  -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_base.info
00:02:42.886  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:02:42.886  geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:02:48.152   22:29:48  -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:02:48.153   22:29:48  -- common/autotest_common.sh@726 -- # xtrace_disable
00:02:48.153   22:29:48  -- common/autotest_common.sh@10 -- # set +x
00:02:48.153   22:29:48  -- spdk/autotest.sh@78 -- # rm -f
00:02:48.153   22:29:48  -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh reset
00:02:49.086  0000:00:04.7 (8086 6f27): Already using the ioatdma driver
00:02:49.086  0000:00:04.6 (8086 6f26): Already using the ioatdma driver
00:02:49.086  0000:00:04.5 (8086 6f25): Already using the ioatdma driver
00:02:49.086  0000:00:04.4 (8086 6f24): Already using the ioatdma driver
00:02:49.086  0000:00:04.3 (8086 6f23): Already using the ioatdma driver
00:02:49.086  0000:00:04.2 (8086 6f22): Already using the ioatdma driver
00:02:49.086  0000:00:04.1 (8086 6f21): Already using the ioatdma driver
00:02:49.086  0000:00:04.0 (8086 6f20): Already using the ioatdma driver
00:02:49.086  0000:80:04.7 (8086 6f27): Already using the ioatdma driver
00:02:49.086  0000:80:04.6 (8086 6f26): Already using the ioatdma driver
00:02:49.086  0000:80:04.5 (8086 6f25): Already using the ioatdma driver
00:02:49.086  0000:80:04.4 (8086 6f24): Already using the ioatdma driver
00:02:49.345  0000:80:04.3 (8086 6f23): Already using the ioatdma driver
00:02:49.345  0000:80:04.2 (8086 6f22): Already using the ioatdma driver
00:02:49.345  0000:80:04.1 (8086 6f21): Already using the ioatdma driver
00:02:49.345  0000:80:04.0 (8086 6f20): Already using the ioatdma driver
00:02:49.345  0000:0d:00.0 (8086 0a54): Already using the nvme driver
00:02:49.345   22:29:49  -- spdk/autotest.sh@83 -- # get_zoned_devs
00:02:49.345   22:29:49  -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:02:49.345   22:29:49  -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:02:49.345   22:29:49  -- common/autotest_common.sh@1658 -- # zoned_ctrls=()
00:02:49.345   22:29:49  -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls
00:02:49.345   22:29:49  -- common/autotest_common.sh@1659 -- # local nvme bdf ns
00:02:49.345   22:29:49  -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:02:49.345   22:29:49  -- common/autotest_common.sh@1669 -- # bdf=0000:0d:00.0
00:02:49.345   22:29:49  -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:02:49.345   22:29:49  -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:02:49.345   22:29:49  -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:02:49.345   22:29:49  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:49.345   22:29:49  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:02:49.345   22:29:49  -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:02:49.345   22:29:49  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:02:49.345   22:29:49  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:02:49.345   22:29:49  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:02:49.345   22:29:49  -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:02:49.345   22:29:49  -- scripts/common.sh@390 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:02:49.345  No valid GPT data, bailing
00:02:49.345    22:29:50  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:02:49.345   22:29:50  -- scripts/common.sh@394 -- # pt=
00:02:49.345   22:29:50  -- scripts/common.sh@395 -- # return 1
00:02:49.345   22:29:50  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:02:49.345  1+0 records in
00:02:49.345  1+0 records out
00:02:49.345  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00228147 s, 460 MB/s
00:02:49.345   22:29:50  -- spdk/autotest.sh@105 -- # sync
00:02:49.345   22:29:50  -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:02:49.345   22:29:50  -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:02:49.345    22:29:50  -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:02:51.875    22:29:52  -- spdk/autotest.sh@111 -- # uname -s
00:02:51.875   22:29:52  -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:02:51.875   22:29:52  -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:02:51.875   22:29:52  -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh status
00:02:53.249  Hugepages
00:02:53.249  node     hugesize     free /  total
00:02:53.249  node0   1048576kB        0 /      0
00:02:53.249  node0      2048kB        0 /      0
00:02:53.249  node1   1048576kB        0 /      0
00:02:53.249  node1      2048kB        0 /      0
00:02:53.249  
00:02:53.249  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:02:53.249  I/OAT                     0000:00:04.0    8086   6f20   0       ioatdma          -          -
00:02:53.249  I/OAT                     0000:00:04.1    8086   6f21   0       ioatdma          -          -
00:02:53.249  I/OAT                     0000:00:04.2    8086   6f22   0       ioatdma          -          -
00:02:53.249  I/OAT                     0000:00:04.3    8086   6f23   0       ioatdma          -          -
00:02:53.249  I/OAT                     0000:00:04.4    8086   6f24   0       ioatdma          -          -
00:02:53.249  I/OAT                     0000:00:04.5    8086   6f25   0       ioatdma          -          -
00:02:53.249  I/OAT                     0000:00:04.6    8086   6f26   0       ioatdma          -          -
00:02:53.249  I/OAT                     0000:00:04.7    8086   6f27   0       ioatdma          -          -
00:02:53.249  NVMe                      0000:0d:00.0    8086   0a54   0       nvme             nvme0      nvme0n1
00:02:53.249  I/OAT                     0000:80:04.0    8086   6f20   1       ioatdma          -          -
00:02:53.249  I/OAT                     0000:80:04.1    8086   6f21   1       ioatdma          -          -
00:02:53.249  I/OAT                     0000:80:04.2    8086   6f22   1       ioatdma          -          -
00:02:53.249  I/OAT                     0000:80:04.3    8086   6f23   1       ioatdma          -          -
00:02:53.249  I/OAT                     0000:80:04.4    8086   6f24   1       ioatdma          -          -
00:02:53.249  I/OAT                     0000:80:04.5    8086   6f25   1       ioatdma          -          -
00:02:53.249  I/OAT                     0000:80:04.6    8086   6f26   1       ioatdma          -          -
00:02:53.249  I/OAT                     0000:80:04.7    8086   6f27   1       ioatdma          -          -
00:02:53.249    22:29:53  -- spdk/autotest.sh@117 -- # uname -s
00:02:53.249   22:29:53  -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:02:53.249   22:29:53  -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:02:53.249   22:29:53  -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh
00:02:54.184  0000:00:04.7 (8086 6f27): ioatdma -> vfio-pci
00:02:54.184  0000:00:04.6 (8086 6f26): ioatdma -> vfio-pci
00:02:54.184  0000:00:04.5 (8086 6f25): ioatdma -> vfio-pci
00:02:54.184  0000:00:04.4 (8086 6f24): ioatdma -> vfio-pci
00:02:54.184  0000:00:04.3 (8086 6f23): ioatdma -> vfio-pci
00:02:54.185  0000:00:04.2 (8086 6f22): ioatdma -> vfio-pci
00:02:54.185  0000:00:04.1 (8086 6f21): ioatdma -> vfio-pci
00:02:54.185  0000:00:04.0 (8086 6f20): ioatdma -> vfio-pci
00:02:54.185  0000:80:04.7 (8086 6f27): ioatdma -> vfio-pci
00:02:54.185  0000:80:04.6 (8086 6f26): ioatdma -> vfio-pci
00:02:54.185  0000:80:04.5 (8086 6f25): ioatdma -> vfio-pci
00:02:54.185  0000:80:04.4 (8086 6f24): ioatdma -> vfio-pci
00:02:54.185  0000:80:04.3 (8086 6f23): ioatdma -> vfio-pci
00:02:54.185  0000:80:04.2 (8086 6f22): ioatdma -> vfio-pci
00:02:54.185  0000:80:04.1 (8086 6f21): ioatdma -> vfio-pci
00:02:54.185  0000:80:04.0 (8086 6f20): ioatdma -> vfio-pci
00:02:55.120  0000:0d:00.0 (8086 0a54): nvme -> vfio-pci
00:02:55.380   22:29:55  -- common/autotest_common.sh@1517 -- # sleep 1
00:02:56.316   22:29:56  -- common/autotest_common.sh@1518 -- # bdfs=()
00:02:56.316   22:29:56  -- common/autotest_common.sh@1518 -- # local bdfs
00:02:56.316   22:29:56  -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:02:56.316    22:29:56  -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:02:56.316    22:29:56  -- common/autotest_common.sh@1498 -- # bdfs=()
00:02:56.316    22:29:56  -- common/autotest_common.sh@1498 -- # local bdfs
00:02:56.316    22:29:56  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:02:56.316     22:29:56  -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:02:56.316     22:29:56  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:02:56.316    22:29:57  -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:02:56.316    22:29:57  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0d:00.0
00:02:56.316   22:29:57  -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh reset
00:02:57.691  Waiting for block devices as requested
00:02:57.691  0000:00:04.7 (8086 6f27): vfio-pci -> ioatdma
00:02:57.691  0000:00:04.6 (8086 6f26): vfio-pci -> ioatdma
00:02:57.691  0000:00:04.5 (8086 6f25): vfio-pci -> ioatdma
00:02:57.691  0000:00:04.4 (8086 6f24): vfio-pci -> ioatdma
00:02:57.691  0000:00:04.3 (8086 6f23): vfio-pci -> ioatdma
00:02:57.691  0000:00:04.2 (8086 6f22): vfio-pci -> ioatdma
00:02:57.948  0000:00:04.1 (8086 6f21): vfio-pci -> ioatdma
00:02:57.948  0000:00:04.0 (8086 6f20): vfio-pci -> ioatdma
00:02:57.948  0000:80:04.7 (8086 6f27): vfio-pci -> ioatdma
00:02:57.948  0000:80:04.6 (8086 6f26): vfio-pci -> ioatdma
00:02:58.207  0000:80:04.5 (8086 6f25): vfio-pci -> ioatdma
00:02:58.207  0000:80:04.4 (8086 6f24): vfio-pci -> ioatdma
00:02:58.207  0000:80:04.3 (8086 6f23): vfio-pci -> ioatdma
00:02:58.207  0000:80:04.2 (8086 6f22): vfio-pci -> ioatdma
00:02:58.465  0000:80:04.1 (8086 6f21): vfio-pci -> ioatdma
00:02:58.465  0000:80:04.0 (8086 6f20): vfio-pci -> ioatdma
00:02:58.465  0000:0d:00.0 (8086 0a54): vfio-pci -> nvme
00:02:58.723   22:29:59  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:02:58.723    22:29:59  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:0d:00.0
00:02:58.723     22:29:59  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0
00:02:58.723     22:29:59  -- common/autotest_common.sh@1487 -- # grep 0000:0d:00.0/nvme/nvme
00:02:58.723    22:29:59  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0d:00.0/nvme/nvme0
00:02:58.723    22:29:59  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0d:00.0/nvme/nvme0 ]]
00:02:58.723     22:29:59  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0d:00.0/nvme/nvme0
00:02:58.723    22:29:59  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:02:58.723   22:29:59  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:02:58.723   22:29:59  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:02:58.723    22:29:59  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:02:58.723    22:29:59  -- common/autotest_common.sh@1531 -- # grep oacs
00:02:58.723    22:29:59  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:02:58.723   22:29:59  -- common/autotest_common.sh@1531 -- # oacs=' 0xf'
00:02:58.723   22:29:59  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:02:58.723   22:29:59  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:02:58.723    22:29:59  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:02:58.723    22:29:59  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:02:58.723    22:29:59  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:02:58.723   22:29:59  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:02:58.723   22:29:59  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:02:58.723   22:29:59  -- common/autotest_common.sh@1543 -- # continue
00:02:58.723   22:29:59  -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:02:58.723   22:29:59  -- common/autotest_common.sh@732 -- # xtrace_disable
00:02:58.723   22:29:59  -- common/autotest_common.sh@10 -- # set +x
00:02:58.723   22:29:59  -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:02:58.723   22:29:59  -- common/autotest_common.sh@726 -- # xtrace_disable
00:02:58.723   22:29:59  -- common/autotest_common.sh@10 -- # set +x
00:02:58.723   22:29:59  -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh
00:03:00.098  0000:00:04.7 (8086 6f27): ioatdma -> vfio-pci
00:03:00.098  0000:00:04.6 (8086 6f26): ioatdma -> vfio-pci
00:03:00.098  0000:00:04.5 (8086 6f25): ioatdma -> vfio-pci
00:03:00.098  0000:00:04.4 (8086 6f24): ioatdma -> vfio-pci
00:03:00.098  0000:00:04.3 (8086 6f23): ioatdma -> vfio-pci
00:03:00.098  0000:00:04.2 (8086 6f22): ioatdma -> vfio-pci
00:03:00.098  0000:00:04.1 (8086 6f21): ioatdma -> vfio-pci
00:03:00.098  0000:00:04.0 (8086 6f20): ioatdma -> vfio-pci
00:03:00.098  0000:80:04.7 (8086 6f27): ioatdma -> vfio-pci
00:03:00.098  0000:80:04.6 (8086 6f26): ioatdma -> vfio-pci
00:03:00.098  0000:80:04.5 (8086 6f25): ioatdma -> vfio-pci
00:03:00.098  0000:80:04.4 (8086 6f24): ioatdma -> vfio-pci
00:03:00.098  0000:80:04.3 (8086 6f23): ioatdma -> vfio-pci
00:03:00.098  0000:80:04.2 (8086 6f22): ioatdma -> vfio-pci
00:03:00.098  0000:80:04.1 (8086 6f21): ioatdma -> vfio-pci
00:03:00.098  0000:80:04.0 (8086 6f20): ioatdma -> vfio-pci
00:03:01.033  0000:0d:00.0 (8086 0a54): nvme -> vfio-pci
00:03:01.033   22:30:01  -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:03:01.033   22:30:01  -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:01.033   22:30:01  -- common/autotest_common.sh@10 -- # set +x
00:03:01.033   22:30:01  -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:03:01.033   22:30:01  -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:03:01.033    22:30:01  -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:03:01.033    22:30:01  -- common/autotest_common.sh@1563 -- # bdfs=()
00:03:01.033    22:30:01  -- common/autotest_common.sh@1563 -- # _bdfs=()
00:03:01.033    22:30:01  -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:03:01.033    22:30:01  -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:03:01.033     22:30:01  -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:03:01.033     22:30:01  -- common/autotest_common.sh@1498 -- # bdfs=()
00:03:01.033     22:30:01  -- common/autotest_common.sh@1498 -- # local bdfs
00:03:01.033     22:30:01  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:01.033      22:30:01  -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:01.033      22:30:01  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:03:01.033     22:30:01  -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:03:01.033     22:30:01  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0d:00.0
00:03:01.033    22:30:01  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:03:01.033     22:30:01  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:0d:00.0/device
00:03:01.033    22:30:01  -- common/autotest_common.sh@1566 -- # device=0x0a54
00:03:01.033    22:30:01  -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:03:01.033    22:30:01  -- common/autotest_common.sh@1568 -- # bdfs+=($bdf)
00:03:01.033    22:30:01  -- common/autotest_common.sh@1572 -- # (( 1 > 0 ))
00:03:01.033    22:30:01  -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:0d:00.0
00:03:01.033   22:30:01  -- common/autotest_common.sh@1579 -- # [[ -z 0000:0d:00.0 ]]
00:03:01.033   22:30:01  -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=20261
00:03:01.033   22:30:01  -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:03:01.033   22:30:01  -- common/autotest_common.sh@1585 -- # waitforlisten 20261
00:03:01.033   22:30:01  -- common/autotest_common.sh@835 -- # '[' -z 20261 ']'
00:03:01.033   22:30:01  -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:01.033   22:30:01  -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:01.033   22:30:01  -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:01.033  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:01.033   22:30:01  -- common/autotest_common.sh@844 -- # xtrace_disable
00:03:01.033   22:30:01  -- common/autotest_common.sh@10 -- # set +x
00:03:01.292  [2024-12-10 22:30:01.829116] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:03:01.292  [2024-12-10 22:30:01.829233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid20261 ]
00:03:01.292  [2024-12-10 22:30:01.961441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:01.550  [2024-12-10 22:30:02.101212] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:03:02.485   22:30:03  -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:03:02.485   22:30:03  -- common/autotest_common.sh@868 -- # return 0
00:03:02.485   22:30:03  -- common/autotest_common.sh@1587 -- # bdf_id=0
00:03:02.485   22:30:03  -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}"
00:03:02.485   22:30:03  -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0d:00.0
00:03:05.769  nvme0n1
00:03:05.769   22:30:06  -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:03:05.769  [2024-12-10 22:30:06.401438] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18
00:03:05.769  [2024-12-10 22:30:06.401508] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18
00:03:05.769  request:
00:03:05.769  {
00:03:05.769    "nvme_ctrlr_name": "nvme0",
00:03:05.769    "password": "test",
00:03:05.769    "method": "bdev_nvme_opal_revert",
00:03:05.769    "req_id": 1
00:03:05.769  }
00:03:05.769  Got JSON-RPC error response
00:03:05.769  response:
00:03:05.769  {
00:03:05.769    "code": -32603,
00:03:05.769    "message": "Internal error"
00:03:05.769  }
00:03:05.769   22:30:06  -- common/autotest_common.sh@1591 -- # true
00:03:05.769   22:30:06  -- common/autotest_common.sh@1592 -- # (( ++bdf_id ))
00:03:05.769   22:30:06  -- common/autotest_common.sh@1595 -- # killprocess 20261
00:03:05.769   22:30:06  -- common/autotest_common.sh@954 -- # '[' -z 20261 ']'
00:03:05.769   22:30:06  -- common/autotest_common.sh@958 -- # kill -0 20261
00:03:05.769    22:30:06  -- common/autotest_common.sh@959 -- # uname
00:03:05.769   22:30:06  -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:03:05.769    22:30:06  -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 20261
00:03:05.769   22:30:06  -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:03:05.770   22:30:06  -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:03:05.770   22:30:06  -- common/autotest_common.sh@972 -- # echo 'killing process with pid 20261'
00:03:05.770  killing process with pid 20261
00:03:05.770   22:30:06  -- common/autotest_common.sh@973 -- # kill 20261
00:03:05.770   22:30:06  -- common/autotest_common.sh@978 -- # wait 20261
00:03:09.959   22:30:10  -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:03:09.959   22:30:10  -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:03:09.959   22:30:10  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:09.959   22:30:10  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:09.959   22:30:10  -- spdk/autotest.sh@149 -- # timing_enter lib
00:03:09.959   22:30:10  -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:09.959   22:30:10  -- common/autotest_common.sh@10 -- # set +x
00:03:09.959   22:30:10  -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:03:09.959   22:30:10  -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/env.sh
00:03:09.959   22:30:10  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:09.959   22:30:10  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:09.959   22:30:10  -- common/autotest_common.sh@10 -- # set +x
00:03:09.959  ************************************
00:03:09.959  START TEST env
00:03:09.959  ************************************
00:03:09.959   22:30:10 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/env.sh
00:03:09.959  * Looking for test storage...
00:03:09.959  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env
00:03:09.959    22:30:10 env -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:03:09.959     22:30:10 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:03:09.959     22:30:10 env -- common/autotest_common.sh@1711 -- # lcov --version
00:03:09.959    22:30:10 env -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:03:09.959    22:30:10 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:09.959    22:30:10 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:09.959    22:30:10 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:09.959    22:30:10 env -- scripts/common.sh@336 -- # IFS=.-:
00:03:09.959    22:30:10 env -- scripts/common.sh@336 -- # read -ra ver1
00:03:09.959    22:30:10 env -- scripts/common.sh@337 -- # IFS=.-:
00:03:09.959    22:30:10 env -- scripts/common.sh@337 -- # read -ra ver2
00:03:09.959    22:30:10 env -- scripts/common.sh@338 -- # local 'op=<'
00:03:09.959    22:30:10 env -- scripts/common.sh@340 -- # ver1_l=2
00:03:09.959    22:30:10 env -- scripts/common.sh@341 -- # ver2_l=1
00:03:09.959    22:30:10 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:09.959    22:30:10 env -- scripts/common.sh@344 -- # case "$op" in
00:03:09.959    22:30:10 env -- scripts/common.sh@345 -- # : 1
00:03:09.959    22:30:10 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:09.959    22:30:10 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:09.959     22:30:10 env -- scripts/common.sh@365 -- # decimal 1
00:03:09.959     22:30:10 env -- scripts/common.sh@353 -- # local d=1
00:03:09.959     22:30:10 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:09.959     22:30:10 env -- scripts/common.sh@355 -- # echo 1
00:03:09.959    22:30:10 env -- scripts/common.sh@365 -- # ver1[v]=1
00:03:09.959     22:30:10 env -- scripts/common.sh@366 -- # decimal 2
00:03:09.959     22:30:10 env -- scripts/common.sh@353 -- # local d=2
00:03:09.959     22:30:10 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:09.959     22:30:10 env -- scripts/common.sh@355 -- # echo 2
00:03:09.959    22:30:10 env -- scripts/common.sh@366 -- # ver2[v]=2
00:03:09.959    22:30:10 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:09.959    22:30:10 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:09.959    22:30:10 env -- scripts/common.sh@368 -- # return 0
00:03:09.959    22:30:10 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:09.960    22:30:10 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:03:09.960  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:09.960  		--rc genhtml_branch_coverage=1
00:03:09.960  		--rc genhtml_function_coverage=1
00:03:09.960  		--rc genhtml_legend=1
00:03:09.960  		--rc geninfo_all_blocks=1
00:03:09.960  		--rc geninfo_unexecuted_blocks=1
00:03:09.960  		
00:03:09.960  		'
00:03:09.960    22:30:10 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:03:09.960  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:09.960  		--rc genhtml_branch_coverage=1
00:03:09.960  		--rc genhtml_function_coverage=1
00:03:09.960  		--rc genhtml_legend=1
00:03:09.960  		--rc geninfo_all_blocks=1
00:03:09.960  		--rc geninfo_unexecuted_blocks=1
00:03:09.960  		
00:03:09.960  		'
00:03:09.960    22:30:10 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:03:09.960  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:09.960  		--rc genhtml_branch_coverage=1
00:03:09.960  		--rc genhtml_function_coverage=1
00:03:09.960  		--rc genhtml_legend=1
00:03:09.960  		--rc geninfo_all_blocks=1
00:03:09.960  		--rc geninfo_unexecuted_blocks=1
00:03:09.960  		
00:03:09.960  		'
00:03:09.960    22:30:10 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:03:09.960  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:09.960  		--rc genhtml_branch_coverage=1
00:03:09.960  		--rc genhtml_function_coverage=1
00:03:09.960  		--rc genhtml_legend=1
00:03:09.960  		--rc geninfo_all_blocks=1
00:03:09.960  		--rc geninfo_unexecuted_blocks=1
00:03:09.960  		
00:03:09.960  		'
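The indented xtrace above shows scripts/common.sh deciding whether the installed lcov (its version pulled out with `lcov --version | awk '{print $NF}'`) predates 2 via `lt 1.15 2`, then exporting the branch/function-coverage LCOV options. A condensed, self-contained sketch of that `cmp_versions` loop — simplified from the trace; treating missing components as 0 is an assumption, the trace only exercises equal-length handling:

```shell
# Sketch of scripts/common.sh's cmp_versions as traced above: split each
# version on ".-:" (the IFS set at common.sh@336-337) and compare
# component by component; the first strict inequality decides the operator.
cmp_versions() {
    local ver1 ver2 op=$2 v a b max
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        a=${ver1[v]:-0}; b=${ver2[v]:-0}          # pad short versions (assumption)
        if (( a > b )); then [[ $op == '>' ]]; return; fi
        if (( a < b )); then [[ $op == '<' ]]; return; fi
    done
    return 1   # versions equal: neither strict inequality holds
}
cmp_versions 1.15 '<' 2 && echo "lcov 1.15 is older than 2"
```

In the run above the comparison returns 0 at the first component (1 < 2), which is why `lt 1.15 2` succeeds and the `--rc lcov_branch_coverage=1 ...` options get exported.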
00:03:09.960   22:30:10 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/memory/memory_ut
00:03:09.960   22:30:10 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:09.960   22:30:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:09.960   22:30:10 env -- common/autotest_common.sh@10 -- # set +x
00:03:09.960  ************************************
00:03:09.960  START TEST env_memory
00:03:09.960  ************************************
00:03:09.960   22:30:10 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/memory/memory_ut
00:03:09.960  
00:03:09.960  
00:03:09.960       CUnit - A unit testing framework for C - Version 2.1-3
00:03:09.960       http://cunit.sourceforge.net/
00:03:09.960  
00:03:09.960  
00:03:09.960  Suite: memory
00:03:09.960    Test: alloc and free memory map ...[2024-12-10 22:30:10.589335] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:03:09.960  passed
00:03:09.960    Test: mem map translation ...[2024-12-10 22:30:10.627761] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:03:09.960  [2024-12-10 22:30:10.627793] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:03:09.960  [2024-12-10 22:30:10.627864] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:03:09.960  [2024-12-10 22:30:10.627882] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:03:09.960  passed
00:03:09.960    Test: mem map registration ...[2024-12-10 22:30:10.691502] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:03:09.960  [2024-12-10 22:30:10.691532] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:03:09.960  passed
00:03:10.220    Test: mem map adjacent registrations ...passed
00:03:10.220  
00:03:10.220  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:10.220                suites      1      1    n/a      0        0
00:03:10.220                 tests      4      4      4      0        0
00:03:10.220               asserts    152    152    152      0      n/a
00:03:10.220  
00:03:10.220  Elapsed time =    0.234 seconds
00:03:10.220  
00:03:10.220  real	0m0.256s
00:03:10.220  user	0m0.240s
00:03:10.220  sys	0m0.015s
00:03:10.220   22:30:10 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:10.220   22:30:10 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:03:10.220  ************************************
00:03:10.220  END TEST env_memory
00:03:10.220  ************************************
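The `*ERROR*` lines in env_memory are the test passing, not failing: "mem map translation" deliberately feeds spdk_mem_map_set_translation unaligned and out-of-range parameters and expects rejection. A hypothetical re-creation of those two checks — the 2 MiB alignment mask and the 2^48 usermode limit are taken from the values in the log (281474976710656 = 2^48); the helper name and the exact order/shape of the range check are illustrative assumptions, not SPDK source:

```shell
# Illustrative analogue of the parameter validation behind the errors above:
# vaddr and len must be 2 MiB-aligned, and the mapping must stay below the
# 2^48 usermode virtual-address limit.
MASK_2MB=$(( (1 << 21) - 1 ))    # 2 MiB alignment mask
VADDR_MAX=$(( 1 << 48 ))         # 281474976710656, as printed in the log
check_translation() {
    local vaddr=$1 len=$2
    if (( vaddr & MASK_2MB || len & MASK_2MB )); then
        echo "invalid parameters, vaddr=$vaddr len=$len"
    elif (( vaddr + len > VADDR_MAX )); then
        echo "invalid usermode virtual address $vaddr"
    else
        echo ok
    fi
}
check_translation 2097152 1234              # len not 2 MiB-aligned
check_translation 1234 2097152              # vaddr not 2 MiB-aligned
check_translation 281474976710656 2097152   # at the 2^48 usermode boundary
```

The same pattern explains the "mem map registration" errors (vaddr=200000, len=1234 and vaddr=4d2, len=2097152 — the same unaligned values in hex).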
00:03:10.220   22:30:10 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/vtophys/vtophys
00:03:10.220   22:30:10 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:10.220   22:30:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:10.220   22:30:10 env -- common/autotest_common.sh@10 -- # set +x
00:03:10.220  ************************************
00:03:10.220  START TEST env_vtophys
00:03:10.220  ************************************
00:03:10.220   22:30:10 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/vtophys/vtophys
00:03:10.220  EAL: lib.eal log level changed from notice to debug
00:03:10.220  EAL: Detected lcore 0 as core 0 on socket 0
00:03:10.220  EAL: Detected lcore 1 as core 1 on socket 0
00:03:10.220  EAL: Detected lcore 2 as core 2 on socket 0
00:03:10.220  EAL: Detected lcore 3 as core 3 on socket 0
00:03:10.220  EAL: Detected lcore 4 as core 4 on socket 0
00:03:10.220  EAL: Detected lcore 5 as core 5 on socket 0
00:03:10.220  EAL: Detected lcore 6 as core 8 on socket 0
00:03:10.220  EAL: Detected lcore 7 as core 9 on socket 0
00:03:10.220  EAL: Detected lcore 8 as core 10 on socket 0
00:03:10.220  EAL: Detected lcore 9 as core 11 on socket 0
00:03:10.220  EAL: Detected lcore 10 as core 12 on socket 0
00:03:10.220  EAL: Detected lcore 11 as core 16 on socket 0
00:03:10.220  EAL: Detected lcore 12 as core 17 on socket 0
00:03:10.220  EAL: Detected lcore 13 as core 18 on socket 0
00:03:10.220  EAL: Detected lcore 14 as core 19 on socket 0
00:03:10.220  EAL: Detected lcore 15 as core 20 on socket 0
00:03:10.220  EAL: Detected lcore 16 as core 21 on socket 0
00:03:10.220  EAL: Detected lcore 17 as core 24 on socket 0
00:03:10.220  EAL: Detected lcore 18 as core 25 on socket 0
00:03:10.220  EAL: Detected lcore 19 as core 26 on socket 0
00:03:10.220  EAL: Detected lcore 20 as core 27 on socket 0
00:03:10.220  EAL: Detected lcore 21 as core 28 on socket 0
00:03:10.220  EAL: Detected lcore 22 as core 0 on socket 1
00:03:10.220  EAL: Detected lcore 23 as core 1 on socket 1
00:03:10.220  EAL: Detected lcore 24 as core 2 on socket 1
00:03:10.220  EAL: Detected lcore 25 as core 3 on socket 1
00:03:10.220  EAL: Detected lcore 26 as core 4 on socket 1
00:03:10.220  EAL: Detected lcore 27 as core 5 on socket 1
00:03:10.220  EAL: Detected lcore 28 as core 8 on socket 1
00:03:10.220  EAL: Detected lcore 29 as core 9 on socket 1
00:03:10.220  EAL: Detected lcore 30 as core 10 on socket 1
00:03:10.220  EAL: Detected lcore 31 as core 11 on socket 1
00:03:10.220  EAL: Detected lcore 32 as core 12 on socket 1
00:03:10.220  EAL: Detected lcore 33 as core 16 on socket 1
00:03:10.220  EAL: Detected lcore 34 as core 17 on socket 1
00:03:10.220  EAL: Detected lcore 35 as core 18 on socket 1
00:03:10.220  EAL: Detected lcore 36 as core 19 on socket 1
00:03:10.220  EAL: Detected lcore 37 as core 20 on socket 1
00:03:10.220  EAL: Detected lcore 38 as core 21 on socket 1
00:03:10.220  EAL: Detected lcore 39 as core 24 on socket 1
00:03:10.220  EAL: Detected lcore 40 as core 25 on socket 1
00:03:10.220  EAL: Detected lcore 41 as core 26 on socket 1
00:03:10.220  EAL: Detected lcore 42 as core 27 on socket 1
00:03:10.220  EAL: Detected lcore 43 as core 28 on socket 1
00:03:10.220  EAL: Detected lcore 44 as core 0 on socket 0
00:03:10.220  EAL: Detected lcore 45 as core 1 on socket 0
00:03:10.220  EAL: Detected lcore 46 as core 2 on socket 0
00:03:10.220  EAL: Detected lcore 47 as core 3 on socket 0
00:03:10.220  EAL: Detected lcore 48 as core 4 on socket 0
00:03:10.220  EAL: Detected lcore 49 as core 5 on socket 0
00:03:10.220  EAL: Detected lcore 50 as core 8 on socket 0
00:03:10.220  EAL: Detected lcore 51 as core 9 on socket 0
00:03:10.220  EAL: Detected lcore 52 as core 10 on socket 0
00:03:10.220  EAL: Detected lcore 53 as core 11 on socket 0
00:03:10.220  EAL: Detected lcore 54 as core 12 on socket 0
00:03:10.220  EAL: Detected lcore 55 as core 16 on socket 0
00:03:10.220  EAL: Detected lcore 56 as core 17 on socket 0
00:03:10.220  EAL: Detected lcore 57 as core 18 on socket 0
00:03:10.220  EAL: Detected lcore 58 as core 19 on socket 0
00:03:10.220  EAL: Detected lcore 59 as core 20 on socket 0
00:03:10.220  EAL: Detected lcore 60 as core 21 on socket 0
00:03:10.220  EAL: Detected lcore 61 as core 24 on socket 0
00:03:10.220  EAL: Detected lcore 62 as core 25 on socket 0
00:03:10.220  EAL: Detected lcore 63 as core 26 on socket 0
00:03:10.220  EAL: Detected lcore 64 as core 27 on socket 0
00:03:10.220  EAL: Detected lcore 65 as core 28 on socket 0
00:03:10.220  EAL: Detected lcore 66 as core 0 on socket 1
00:03:10.220  EAL: Detected lcore 67 as core 1 on socket 1
00:03:10.220  EAL: Detected lcore 68 as core 2 on socket 1
00:03:10.220  EAL: Detected lcore 69 as core 3 on socket 1
00:03:10.220  EAL: Detected lcore 70 as core 4 on socket 1
00:03:10.220  EAL: Detected lcore 71 as core 5 on socket 1
00:03:10.220  EAL: Detected lcore 72 as core 8 on socket 1
00:03:10.220  EAL: Detected lcore 73 as core 9 on socket 1
00:03:10.220  EAL: Detected lcore 74 as core 10 on socket 1
00:03:10.220  EAL: Detected lcore 75 as core 11 on socket 1
00:03:10.220  EAL: Detected lcore 76 as core 12 on socket 1
00:03:10.220  EAL: Detected lcore 77 as core 16 on socket 1
00:03:10.220  EAL: Detected lcore 78 as core 17 on socket 1
00:03:10.220  EAL: Detected lcore 79 as core 18 on socket 1
00:03:10.220  EAL: Detected lcore 80 as core 19 on socket 1
00:03:10.220  EAL: Detected lcore 81 as core 20 on socket 1
00:03:10.220  EAL: Detected lcore 82 as core 21 on socket 1
00:03:10.220  EAL: Detected lcore 83 as core 24 on socket 1
00:03:10.220  EAL: Detected lcore 84 as core 25 on socket 1
00:03:10.220  EAL: Detected lcore 85 as core 26 on socket 1
00:03:10.220  EAL: Detected lcore 86 as core 27 on socket 1
00:03:10.220  EAL: Detected lcore 87 as core 28 on socket 1
00:03:10.220  EAL: Maximum logical cores by configuration: 128
00:03:10.220  EAL: Detected CPU lcores: 88
00:03:10.220  EAL: Detected NUMA nodes: 2
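The detection block above maps 88 lcores onto 2 NUMA sockets, 44 per socket: lcores 0-21/44-65 land on socket 0 and 22-43/66-87 on socket 1, with lcores 44-87 re-using the physical core IDs of 0-43 (hyperthread siblings). A small sketch that tallies lcores per socket from lines of exactly that form — the awk relies only on the socket number being the last field:

```shell
# Count "EAL: Detected lcore X as core Y on socket Z" lines per socket.
count_per_socket() {
    awk '/Detected lcore/ { n[$NF]++ } END { for (s in n) print "socket " s ": " n[s] }'
}
printf '%s\n' \
  "EAL: Detected lcore 0 as core 0 on socket 0" \
  "EAL: Detected lcore 1 as core 1 on socket 0" \
  "EAL: Detected lcore 2 as core 0 on socket 1" | count_per_socket
```

Piping the full log section above through the same function would report 44 lcores on each socket, matching "Detected CPU lcores: 88" and "Detected NUMA nodes: 2".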
00:03:10.220  EAL: Checking presence of .so 'librte_eal.so.24.1'
00:03:10.220  EAL: Detected shared linkage of DPDK
00:03:10.220  EAL: No shared files mode enabled, IPC will be disabled
00:03:10.220  EAL: No shared files mode enabled, IPC is disabled
00:03:10.220  EAL: Bus pci wants IOVA as 'DC'
00:03:10.220  EAL: Bus auxiliary wants IOVA as 'DC'
00:03:10.220  EAL: Bus vdev wants IOVA as 'DC'
00:03:10.220  EAL: Buses did not request a specific IOVA mode.
00:03:10.220  EAL: IOMMU is available, selecting IOVA as VA mode.
00:03:10.220  EAL: Selected IOVA mode 'VA'
00:03:10.220  EAL: Probing VFIO support...
00:03:10.220  EAL: IOMMU type 1 (Type 1) is supported
00:03:10.220  EAL: IOMMU type 7 (sPAPR) is not supported
00:03:10.220  EAL: IOMMU type 8 (No-IOMMU) is not supported
00:03:10.220  EAL: VFIO support initialized
00:03:10.220  EAL: Ask a virtual area of 0x2e000 bytes
00:03:10.220  EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:03:10.220  EAL: Setting up physically contiguous memory...
00:03:10.220  EAL: Setting maximum number of open files to 524288
00:03:10.220  EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:03:10.220  EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:03:10.220  EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:03:10.220  EAL: Ask a virtual area of 0x61000 bytes
00:03:10.220  EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:03:10.220  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:10.220  EAL: Ask a virtual area of 0x400000000 bytes
00:03:10.220  EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:03:10.220  EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:03:10.220  EAL: Ask a virtual area of 0x61000 bytes
00:03:10.220  EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:03:10.220  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:10.220  EAL: Ask a virtual area of 0x400000000 bytes
00:03:10.220  EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:03:10.220  EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:03:10.220  EAL: Ask a virtual area of 0x61000 bytes
00:03:10.220  EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:03:10.220  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:10.220  EAL: Ask a virtual area of 0x400000000 bytes
00:03:10.220  EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:03:10.220  EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:03:10.220  EAL: Ask a virtual area of 0x61000 bytes
00:03:10.220  EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:03:10.220  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:10.220  EAL: Ask a virtual area of 0x400000000 bytes
00:03:10.220  EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:03:10.220  EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:03:10.220  EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:03:10.220  EAL: Ask a virtual area of 0x61000 bytes
00:03:10.220  EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:03:10.220  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:10.220  EAL: Ask a virtual area of 0x400000000 bytes
00:03:10.220  EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:03:10.220  EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:03:10.220  EAL: Ask a virtual area of 0x61000 bytes
00:03:10.220  EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:03:10.220  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:10.221  EAL: Ask a virtual area of 0x400000000 bytes
00:03:10.221  EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:03:10.221  EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:03:10.221  EAL: Ask a virtual area of 0x61000 bytes
00:03:10.221  EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:03:10.221  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:10.221  EAL: Ask a virtual area of 0x400000000 bytes
00:03:10.221  EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:03:10.221  EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:03:10.221  EAL: Ask a virtual area of 0x61000 bytes
00:03:10.221  EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:03:10.221  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:10.221  EAL: Ask a virtual area of 0x400000000 bytes
00:03:10.221  EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:03:10.221  EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:03:10.221  EAL: Hugepages will be freed exactly as allocated.
00:03:10.221  EAL: No shared files mode enabled, IPC is disabled
00:03:10.221  EAL: No shared files mode enabled, IPC is disabled
00:03:10.221  EAL: TSC frequency is ~2200000 KHz
00:03:10.221  EAL: Main lcore 0 is ready (tid=7f22c90bab40;cpuset=[0])
00:03:10.221  EAL: Trying to obtain current memory policy.
00:03:10.221  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:10.221  EAL: Restoring previous memory policy: 0
00:03:10.221  EAL: request: mp_malloc_sync
00:03:10.221  EAL: No shared files mode enabled, IPC is disabled
00:03:10.221  EAL: Heap on socket 0 was expanded by 2MB
00:03:10.221  EAL: No shared files mode enabled, IPC is disabled
00:03:10.221  EAL: No shared files mode enabled, IPC is disabled
00:03:10.221  EAL: No PCI address specified using 'addr=<id>' in: bus=pci
00:03:10.221  EAL: Mem event callback 'spdk:(nil)' registered
00:03:10.221  
00:03:10.221  
00:03:10.221       CUnit - A unit testing framework for C - Version 2.1-3
00:03:10.221       http://cunit.sourceforge.net/
00:03:10.221  
00:03:10.221  
00:03:10.221  Suite: components_suite
00:03:10.789    Test: vtophys_malloc_test ...passed
00:03:10.789    Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:03:10.789  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:10.789  EAL: Restoring previous memory policy: 4
00:03:10.789  EAL: Calling mem event callback 'spdk:(nil)'
00:03:10.789  EAL: request: mp_malloc_sync
00:03:10.789  EAL: No shared files mode enabled, IPC is disabled
00:03:10.789  EAL: Heap on socket 0 was expanded by 4MB
00:03:10.789  EAL: Calling mem event callback 'spdk:(nil)'
00:03:10.789  EAL: request: mp_malloc_sync
00:03:10.789  EAL: No shared files mode enabled, IPC is disabled
00:03:10.789  EAL: Heap on socket 0 was shrunk by 4MB
00:03:10.789  EAL: Trying to obtain current memory policy.
00:03:10.789  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:10.789  EAL: Restoring previous memory policy: 4
00:03:10.789  EAL: Calling mem event callback 'spdk:(nil)'
00:03:10.789  EAL: request: mp_malloc_sync
00:03:10.789  EAL: No shared files mode enabled, IPC is disabled
00:03:10.789  EAL: Heap on socket 0 was expanded by 6MB
00:03:10.789  EAL: Calling mem event callback 'spdk:(nil)'
00:03:10.789  EAL: request: mp_malloc_sync
00:03:10.789  EAL: No shared files mode enabled, IPC is disabled
00:03:10.789  EAL: Heap on socket 0 was shrunk by 6MB
00:03:10.789  EAL: Trying to obtain current memory policy.
00:03:10.789  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:10.789  EAL: Restoring previous memory policy: 4
00:03:10.789  EAL: Calling mem event callback 'spdk:(nil)'
00:03:10.789  EAL: request: mp_malloc_sync
00:03:10.789  EAL: No shared files mode enabled, IPC is disabled
00:03:10.789  EAL: Heap on socket 0 was expanded by 10MB
00:03:10.789  EAL: Calling mem event callback 'spdk:(nil)'
00:03:10.789  EAL: request: mp_malloc_sync
00:03:10.789  EAL: No shared files mode enabled, IPC is disabled
00:03:10.789  EAL: Heap on socket 0 was shrunk by 10MB
00:03:10.789  EAL: Trying to obtain current memory policy.
00:03:10.789  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:10.789  EAL: Restoring previous memory policy: 4
00:03:10.789  EAL: Calling mem event callback 'spdk:(nil)'
00:03:10.789  EAL: request: mp_malloc_sync
00:03:10.789  EAL: No shared files mode enabled, IPC is disabled
00:03:10.789  EAL: Heap on socket 0 was expanded by 18MB
00:03:10.789  EAL: Calling mem event callback 'spdk:(nil)'
00:03:10.789  EAL: request: mp_malloc_sync
00:03:10.789  EAL: No shared files mode enabled, IPC is disabled
00:03:10.789  EAL: Heap on socket 0 was shrunk by 18MB
00:03:10.789  EAL: Trying to obtain current memory policy.
00:03:10.789  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:10.789  EAL: Restoring previous memory policy: 4
00:03:10.789  EAL: Calling mem event callback 'spdk:(nil)'
00:03:10.789  EAL: request: mp_malloc_sync
00:03:10.789  EAL: No shared files mode enabled, IPC is disabled
00:03:10.789  EAL: Heap on socket 0 was expanded by 34MB
00:03:11.048  EAL: Calling mem event callback 'spdk:(nil)'
00:03:11.048  EAL: request: mp_malloc_sync
00:03:11.048  EAL: No shared files mode enabled, IPC is disabled
00:03:11.048  EAL: Heap on socket 0 was shrunk by 34MB
00:03:11.048  EAL: Trying to obtain current memory policy.
00:03:11.048  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:11.048  EAL: Restoring previous memory policy: 4
00:03:11.048  EAL: Calling mem event callback 'spdk:(nil)'
00:03:11.048  EAL: request: mp_malloc_sync
00:03:11.048  EAL: No shared files mode enabled, IPC is disabled
00:03:11.048  EAL: Heap on socket 0 was expanded by 66MB
00:03:11.048  EAL: Calling mem event callback 'spdk:(nil)'
00:03:11.048  EAL: request: mp_malloc_sync
00:03:11.048  EAL: No shared files mode enabled, IPC is disabled
00:03:11.048  EAL: Heap on socket 0 was shrunk by 66MB
00:03:11.306  EAL: Trying to obtain current memory policy.
00:03:11.306  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:11.306  EAL: Restoring previous memory policy: 4
00:03:11.306  EAL: Calling mem event callback 'spdk:(nil)'
00:03:11.306  EAL: request: mp_malloc_sync
00:03:11.306  EAL: No shared files mode enabled, IPC is disabled
00:03:11.306  EAL: Heap on socket 0 was expanded by 130MB
00:03:11.565  EAL: Calling mem event callback 'spdk:(nil)'
00:03:11.565  EAL: request: mp_malloc_sync
00:03:11.565  EAL: No shared files mode enabled, IPC is disabled
00:03:11.565  EAL: Heap on socket 0 was shrunk by 130MB
00:03:11.824  EAL: Trying to obtain current memory policy.
00:03:11.824  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:11.824  EAL: Restoring previous memory policy: 4
00:03:11.824  EAL: Calling mem event callback 'spdk:(nil)'
00:03:11.824  EAL: request: mp_malloc_sync
00:03:11.824  EAL: No shared files mode enabled, IPC is disabled
00:03:11.824  EAL: Heap on socket 0 was expanded by 258MB
00:03:12.392  EAL: Calling mem event callback 'spdk:(nil)'
00:03:12.392  EAL: request: mp_malloc_sync
00:03:12.392  EAL: No shared files mode enabled, IPC is disabled
00:03:12.392  EAL: Heap on socket 0 was shrunk by 258MB
00:03:12.959  EAL: Trying to obtain current memory policy.
00:03:12.959  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:12.959  EAL: Restoring previous memory policy: 4
00:03:12.959  EAL: Calling mem event callback 'spdk:(nil)'
00:03:12.959  EAL: request: mp_malloc_sync
00:03:12.959  EAL: No shared files mode enabled, IPC is disabled
00:03:12.959  EAL: Heap on socket 0 was expanded by 514MB
00:03:13.894  EAL: Calling mem event callback 'spdk:(nil)'
00:03:14.152  EAL: request: mp_malloc_sync
00:03:14.152  EAL: No shared files mode enabled, IPC is disabled
00:03:14.152  EAL: Heap on socket 0 was shrunk by 514MB
00:03:15.087  EAL: Trying to obtain current memory policy.
00:03:15.087  EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:15.346  EAL: Restoring previous memory policy: 4
00:03:15.346  EAL: Calling mem event callback 'spdk:(nil)'
00:03:15.346  EAL: request: mp_malloc_sync
00:03:15.346  EAL: No shared files mode enabled, IPC is disabled
00:03:15.346  EAL: Heap on socket 0 was expanded by 1026MB
00:03:17.250  EAL: Calling mem event callback 'spdk:(nil)'
00:03:17.509  EAL: request: mp_malloc_sync
00:03:17.509  EAL: No shared files mode enabled, IPC is disabled
00:03:17.509  EAL: Heap on socket 0 was shrunk by 1026MB
00:03:19.410  passed
00:03:19.410  
00:03:19.410  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:19.410                suites      1      1    n/a      0        0
00:03:19.410                 tests      2      2      2      0        0
00:03:19.410               asserts    497    497    497      0      n/a
00:03:19.410  
00:03:19.410  Elapsed time =    8.912 seconds
00:03:19.410  EAL: Calling mem event callback 'spdk:(nil)'
00:03:19.410  EAL: request: mp_malloc_sync
00:03:19.410  EAL: No shared files mode enabled, IPC is disabled
00:03:19.410  EAL: Heap on socket 0 was shrunk by 2MB
00:03:19.410  EAL: No shared files mode enabled, IPC is disabled
00:03:19.410  EAL: No shared files mode enabled, IPC is disabled
00:03:19.410  EAL: No shared files mode enabled, IPC is disabled
00:03:19.410  
00:03:19.410  real	0m9.183s
00:03:19.410  user	0m8.098s
00:03:19.410  sys	0m1.020s
00:03:19.410   22:30:20 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:19.410   22:30:20 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:03:19.410  ************************************
00:03:19.410  END TEST env_vtophys
00:03:19.410  ************************************
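The expand/shrink pairs in env_vtophys step through 4, 6, 10, 18, 34, 66, 130, 258, 514, 1026 MB — each is 2^k + 2, i.e. a doubling power-of-two payload plus a fixed 2 MB (one hugepage) of slack. That reading is an inference from the log, not a statement about SPDK's source; a one-liner reproducing the series:

```shell
# Generate the heap-growth series observed above: (2^k + 2) MB for k=1..10.
sizes=()
for (( k = 1; k <= 10; k++ )); do
    sizes+=( $(( (1 << k) + 2 )) )
done
echo "${sizes[@]}"   # 4 6 10 18 34 66 130 258 514 1026
```

Each expansion is immediately followed by a matching shrink once the allocation is freed, consistent with the earlier "Hugepages will be freed exactly as allocated" notice.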
00:03:19.410   22:30:20 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/pci/pci_ut
00:03:19.410   22:30:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:19.410   22:30:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:19.410   22:30:20 env -- common/autotest_common.sh@10 -- # set +x
00:03:19.410  ************************************
00:03:19.410  START TEST env_pci
00:03:19.410  ************************************
00:03:19.410   22:30:20 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/pci/pci_ut
00:03:19.410  
00:03:19.410  
00:03:19.410       CUnit - A unit testing framework for C - Version 2.1-3
00:03:19.410       http://cunit.sourceforge.net/
00:03:19.410  
00:03:19.410  
00:03:19.410  Suite: pci
00:03:19.410    Test: pci_hook ...[2024-12-10 22:30:20.088935] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 23867 has claimed it
00:03:19.410  EAL: Cannot find device (10000:00:01.0)
00:03:19.410  EAL: Failed to attach device on primary process
00:03:19.410  passed
00:03:19.410  
00:03:19.410  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:19.410                suites      1      1    n/a      0        0
00:03:19.410                 tests      1      1      1      0        0
00:03:19.410               asserts     25     25     25      0      n/a
00:03:19.410  
00:03:19.410  Elapsed time =    0.035 seconds
00:03:19.410  
00:03:19.410  real	0m0.084s
00:03:19.410  user	0m0.042s
00:03:19.410  sys	0m0.043s
00:03:19.410   22:30:20 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:19.410   22:30:20 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:03:19.410  ************************************
00:03:19.410  END TEST env_pci
00:03:19.410  ************************************
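The env_pci "Cannot create lock" error is also the expected path: pci_hook claims the synthetic device 10000:00:01.0 while another process (pid 23867 in this run) already holds the per-device lock file /var/tmp/spdk_pci_lock_<BDF>, so the second claim must fail. A mkdir-based analogue of that claim-once semantic — the real code locks a file under /var/tmp rather than creating a directory, so this is an illustration of the behavior, not the implementation:

```shell
# Atomic claim-once via mkdir: the first caller wins, every later caller
# sees EEXIST and reports the device as already claimed.
claim() {
    local bdf=$1
    if mkdir "/tmp/pci_claim_demo_$bdf" 2>/dev/null; then
        echo "claimed $bdf"
    else
        echo "Cannot create lock on device $bdf"
    fi
}
claim 10000:00:01.0     # first claim succeeds
claim 10000:00:01.0     # second claim fails, as in the log
rm -rf /tmp/pci_claim_demo_10000:00:01.0
```

The follow-on "Cannot find device" / "Failed to attach device" lines are EAL reacting to the made-up bus:device.function, which never exists on the host.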
00:03:19.410   22:30:20 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:03:19.410    22:30:20 env -- env/env.sh@15 -- # uname
00:03:19.410   22:30:20 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:03:19.410   22:30:20 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:03:19.410   22:30:20 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:19.410   22:30:20 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:03:19.410   22:30:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:19.410   22:30:20 env -- common/autotest_common.sh@10 -- # set +x
00:03:19.410  ************************************
00:03:19.410  START TEST env_dpdk_post_init
00:03:19.410  ************************************
00:03:19.410   22:30:20 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:19.669  EAL: Detected CPU lcores: 88
00:03:19.669  EAL: Detected NUMA nodes: 2
00:03:19.669  EAL: Detected shared linkage of DPDK
00:03:19.669  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:19.669  EAL: Selected IOVA mode 'VA'
00:03:19.669  EAL: VFIO support initialized
00:03:19.669  TELEMETRY: No legacy callbacks, legacy socket not created
00:03:19.669  EAL: Using IOMMU type 1 (Type 1)
00:03:19.927  EAL: Ignore mapping IO port bar(1)
00:03:19.927  EAL: Probe PCI driver: spdk_ioat (8086:6f20) device: 0000:00:04.0 (socket 0)
00:03:19.927  EAL: Ignore mapping IO port bar(1)
00:03:19.927  EAL: Probe PCI driver: spdk_ioat (8086:6f21) device: 0000:00:04.1 (socket 0)
00:03:19.927  EAL: Ignore mapping IO port bar(1)
00:03:19.927  EAL: Probe PCI driver: spdk_ioat (8086:6f22) device: 0000:00:04.2 (socket 0)
00:03:19.927  EAL: Ignore mapping IO port bar(1)
00:03:19.927  EAL: Probe PCI driver: spdk_ioat (8086:6f23) device: 0000:00:04.3 (socket 0)
00:03:19.927  EAL: Ignore mapping IO port bar(1)
00:03:19.927  EAL: Probe PCI driver: spdk_ioat (8086:6f24) device: 0000:00:04.4 (socket 0)
00:03:19.927  EAL: Ignore mapping IO port bar(1)
00:03:19.927  EAL: Probe PCI driver: spdk_ioat (8086:6f25) device: 0000:00:04.5 (socket 0)
00:03:19.927  EAL: Ignore mapping IO port bar(1)
00:03:19.927  EAL: Probe PCI driver: spdk_ioat (8086:6f26) device: 0000:00:04.6 (socket 0)
00:03:19.927  EAL: Ignore mapping IO port bar(1)
00:03:19.927  EAL: Probe PCI driver: spdk_ioat (8086:6f27) device: 0000:00:04.7 (socket 0)
00:03:20.863  EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0d:00.0 (socket 0)
00:03:20.863  EAL: Ignore mapping IO port bar(1)
00:03:20.863  EAL: Probe PCI driver: spdk_ioat (8086:6f20) device: 0000:80:04.0 (socket 1)
00:03:20.863  EAL: Ignore mapping IO port bar(1)
00:03:20.863  EAL: Probe PCI driver: spdk_ioat (8086:6f21) device: 0000:80:04.1 (socket 1)
00:03:20.863  EAL: Ignore mapping IO port bar(1)
00:03:20.863  EAL: Probe PCI driver: spdk_ioat (8086:6f22) device: 0000:80:04.2 (socket 1)
00:03:20.863  EAL: Ignore mapping IO port bar(1)
00:03:20.863  EAL: Probe PCI driver: spdk_ioat (8086:6f23) device: 0000:80:04.3 (socket 1)
00:03:20.863  EAL: Ignore mapping IO port bar(1)
00:03:20.863  EAL: Probe PCI driver: spdk_ioat (8086:6f24) device: 0000:80:04.4 (socket 1)
00:03:20.863  EAL: Ignore mapping IO port bar(1)
00:03:20.863  EAL: Probe PCI driver: spdk_ioat (8086:6f25) device: 0000:80:04.5 (socket 1)
00:03:20.863  EAL: Ignore mapping IO port bar(1)
00:03:20.863  EAL: Probe PCI driver: spdk_ioat (8086:6f26) device: 0000:80:04.6 (socket 1)
00:03:20.863  EAL: Ignore mapping IO port bar(1)
00:03:20.863  EAL: Probe PCI driver: spdk_ioat (8086:6f27) device: 0000:80:04.7 (socket 1)
00:03:24.148  EAL: Releasing PCI mapped resource for 0000:0d:00.0
00:03:24.148  EAL: Calling pci_unmap_resource for 0000:0d:00.0 at 0x202001020000
00:03:24.148  Starting DPDK initialization...
00:03:24.148  Starting SPDK post initialization...
00:03:24.148  SPDK NVMe probe
00:03:24.148  Attaching to 0000:0d:00.0
00:03:24.148  Attached to 0000:0d:00.0
00:03:24.148  Cleaning up...
00:03:24.148  
00:03:24.148  real	0m4.581s
00:03:24.148  user	0m3.128s
00:03:24.148  sys	0m0.514s
00:03:24.148   22:30:24 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:24.148   22:30:24 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:03:24.148  ************************************
00:03:24.148  END TEST env_dpdk_post_init
00:03:24.148  ************************************
00:03:24.148    22:30:24 env -- env/env.sh@26 -- # uname
00:03:24.148   22:30:24 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:03:24.148   22:30:24 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:24.148   22:30:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:24.148   22:30:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:24.148   22:30:24 env -- common/autotest_common.sh@10 -- # set +x
00:03:24.148  ************************************
00:03:24.148  START TEST env_mem_callbacks
00:03:24.148  ************************************
00:03:24.148   22:30:24 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:24.148  EAL: Detected CPU lcores: 88
00:03:24.148  EAL: Detected NUMA nodes: 2
00:03:24.148  EAL: Detected shared linkage of DPDK
00:03:24.148  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:24.148  EAL: Selected IOVA mode 'VA'
00:03:24.148  EAL: VFIO support initialized
00:03:24.148  TELEMETRY: No legacy callbacks, legacy socket not created
00:03:24.148  
00:03:24.148  
00:03:24.148       CUnit - A unit testing framework for C - Version 2.1-3
00:03:24.148       http://cunit.sourceforge.net/
00:03:24.148  
00:03:24.148  
00:03:24.148  Suite: memory
00:03:24.148    Test: test ...
00:03:24.148  register 0x200000200000 2097152
00:03:24.148  malloc 3145728
00:03:24.148  register 0x200000400000 4194304
00:03:24.148  buf 0x2000004fffc0 len 3145728 PASSED
00:03:24.148  malloc 64
00:03:24.148  buf 0x2000004ffec0 len 64 PASSED
00:03:24.148  malloc 4194304
00:03:24.148  register 0x200000800000 6291456
00:03:24.148  buf 0x2000009fffc0 len 4194304 PASSED
00:03:24.148  free 0x2000004fffc0 3145728
00:03:24.148  free 0x2000004ffec0 64
00:03:24.148  unregister 0x200000400000 4194304 PASSED
00:03:24.148  free 0x2000009fffc0 4194304
00:03:24.407  unregister 0x200000800000 6291456 PASSED
00:03:24.407  malloc 8388608
00:03:24.407  register 0x200000400000 10485760
00:03:24.407  buf 0x2000005fffc0 len 8388608 PASSED
00:03:24.407  free 0x2000005fffc0 8388608
00:03:24.407  unregister 0x200000400000 10485760 PASSED
00:03:24.407  passed
00:03:24.407  
00:03:24.407  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:24.407                suites      1      1    n/a      0        0
00:03:24.407                 tests      1      1      1      0        0
00:03:24.407               asserts     15     15     15      0      n/a
00:03:24.407  
00:03:24.407  Elapsed time =    0.067 seconds
00:03:24.407  
00:03:24.407  real	0m0.175s
00:03:24.407  user	0m0.098s
00:03:24.407  sys	0m0.076s
00:03:24.407   22:30:24 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:24.407   22:30:24 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:03:24.407  ************************************
00:03:24.407  END TEST env_mem_callbacks
00:03:24.407  ************************************
00:03:24.407  
00:03:24.407  real	0m14.638s
00:03:24.407  user	0m11.773s
00:03:24.407  sys	0m1.878s
00:03:24.407   22:30:25 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:24.407   22:30:25 env -- common/autotest_common.sh@10 -- # set +x
00:03:24.407  ************************************
00:03:24.407  END TEST env
00:03:24.407  ************************************
00:03:24.407   22:30:25  -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/rpc.sh
00:03:24.407   22:30:25  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:24.407   22:30:25  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:24.407   22:30:25  -- common/autotest_common.sh@10 -- # set +x
00:03:24.407  ************************************
00:03:24.407  START TEST rpc
00:03:24.407  ************************************
00:03:24.407   22:30:25 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/rpc.sh
00:03:24.407  * Looking for test storage...
00:03:24.407  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc
00:03:24.407    22:30:25 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:03:24.407     22:30:25 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:03:24.407     22:30:25 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:03:24.407    22:30:25 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:03:24.407    22:30:25 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:24.407    22:30:25 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:24.407    22:30:25 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:24.407    22:30:25 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:03:24.407    22:30:25 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:03:24.407    22:30:25 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:03:24.407    22:30:25 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:03:24.407    22:30:25 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:03:24.407    22:30:25 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:03:24.407    22:30:25 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:03:24.407    22:30:25 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:24.407    22:30:25 rpc -- scripts/common.sh@344 -- # case "$op" in
00:03:24.407    22:30:25 rpc -- scripts/common.sh@345 -- # : 1
00:03:24.407    22:30:25 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:24.407    22:30:25 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:24.407     22:30:25 rpc -- scripts/common.sh@365 -- # decimal 1
00:03:24.407     22:30:25 rpc -- scripts/common.sh@353 -- # local d=1
00:03:24.407     22:30:25 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:24.407     22:30:25 rpc -- scripts/common.sh@355 -- # echo 1
00:03:24.407    22:30:25 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:03:24.407     22:30:25 rpc -- scripts/common.sh@366 -- # decimal 2
00:03:24.407     22:30:25 rpc -- scripts/common.sh@353 -- # local d=2
00:03:24.407     22:30:25 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:24.407     22:30:25 rpc -- scripts/common.sh@355 -- # echo 2
00:03:24.407    22:30:25 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:03:24.407    22:30:25 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:24.407    22:30:25 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:24.407    22:30:25 rpc -- scripts/common.sh@368 -- # return 0
00:03:24.407    22:30:25 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:24.407    22:30:25 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:03:24.407  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:24.407  		--rc genhtml_branch_coverage=1
00:03:24.407  		--rc genhtml_function_coverage=1
00:03:24.407  		--rc genhtml_legend=1
00:03:24.407  		--rc geninfo_all_blocks=1
00:03:24.407  		--rc geninfo_unexecuted_blocks=1
00:03:24.407  		
00:03:24.407  		'
00:03:24.407    22:30:25 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:03:24.407  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:24.407  		--rc genhtml_branch_coverage=1
00:03:24.407  		--rc genhtml_function_coverage=1
00:03:24.407  		--rc genhtml_legend=1
00:03:24.407  		--rc geninfo_all_blocks=1
00:03:24.407  		--rc geninfo_unexecuted_blocks=1
00:03:24.407  		
00:03:24.407  		'
00:03:24.407    22:30:25 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:03:24.407  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:24.407  		--rc genhtml_branch_coverage=1
00:03:24.407  		--rc genhtml_function_coverage=1
00:03:24.407  		--rc genhtml_legend=1
00:03:24.407  		--rc geninfo_all_blocks=1
00:03:24.407  		--rc geninfo_unexecuted_blocks=1
00:03:24.407  		
00:03:24.407  		'
00:03:24.407    22:30:25 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:03:24.407  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:24.407  		--rc genhtml_branch_coverage=1
00:03:24.407  		--rc genhtml_function_coverage=1
00:03:24.407  		--rc genhtml_legend=1
00:03:24.407  		--rc geninfo_all_blocks=1
00:03:24.407  		--rc geninfo_unexecuted_blocks=1
00:03:24.407  		
00:03:24.407  		'
00:03:24.665   22:30:25 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:03:24.665   22:30:25 rpc -- rpc/rpc.sh@65 -- # spdk_pid=24830
00:03:24.665   22:30:25 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:24.665   22:30:25 rpc -- rpc/rpc.sh@67 -- # waitforlisten 24830
00:03:24.665   22:30:25 rpc -- common/autotest_common.sh@835 -- # '[' -z 24830 ']'
00:03:24.665   22:30:25 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:24.665   22:30:25 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:24.665   22:30:25 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:24.665  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:24.665   22:30:25 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:03:24.665   22:30:25 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:24.665  [2024-12-10 22:30:25.285149] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:03:24.665  [2024-12-10 22:30:25.285265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid24830 ]
00:03:24.665  [2024-12-10 22:30:25.414379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:24.923  [2024-12-10 22:30:25.551568] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:03:24.923  [2024-12-10 22:30:25.551634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 24830' to capture a snapshot of events at runtime.
00:03:24.923  [2024-12-10 22:30:25.551657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:03:24.923  [2024-12-10 22:30:25.551674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:03:24.923  [2024-12-10 22:30:25.551691] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid24830 for offline analysis/debug.
00:03:24.923  [2024-12-10 22:30:25.553270] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:03:25.859   22:30:26 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:03:25.859   22:30:26 rpc -- common/autotest_common.sh@868 -- # return 0
00:03:25.859   22:30:26 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc
00:03:25.859   22:30:26 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc
00:03:25.859   22:30:26 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:03:25.859   22:30:26 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:03:25.859   22:30:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:25.859   22:30:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:25.859   22:30:26 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:25.859  ************************************
00:03:25.859  START TEST rpc_integrity
00:03:25.859  ************************************
00:03:25.859   22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:03:25.859    22:30:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:03:25.859    22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:25.859    22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:25.859    22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:25.859   22:30:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:03:25.859    22:30:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:03:25.859   22:30:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:03:25.859    22:30:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:03:25.859    22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:25.859    22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:26.118    22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:26.119   22:30:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:03:26.119    22:30:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:03:26.119    22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:26.119    22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:26.119    22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:26.119   22:30:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:03:26.119  {
00:03:26.119  "name": "Malloc0",
00:03:26.119  "aliases": [
00:03:26.119  "0d1d602e-9c5c-439e-9568-00d804330543"
00:03:26.119  ],
00:03:26.119  "product_name": "Malloc disk",
00:03:26.119  "block_size": 512,
00:03:26.119  "num_blocks": 16384,
00:03:26.119  "uuid": "0d1d602e-9c5c-439e-9568-00d804330543",
00:03:26.119  "assigned_rate_limits": {
00:03:26.119  "rw_ios_per_sec": 0,
00:03:26.119  "rw_mbytes_per_sec": 0,
00:03:26.119  "r_mbytes_per_sec": 0,
00:03:26.119  "w_mbytes_per_sec": 0
00:03:26.119  },
00:03:26.119  "claimed": false,
00:03:26.119  "zoned": false,
00:03:26.119  "supported_io_types": {
00:03:26.119  "read": true,
00:03:26.119  "write": true,
00:03:26.119  "unmap": true,
00:03:26.119  "flush": true,
00:03:26.119  "reset": true,
00:03:26.119  "nvme_admin": false,
00:03:26.119  "nvme_io": false,
00:03:26.119  "nvme_io_md": false,
00:03:26.119  "write_zeroes": true,
00:03:26.119  "zcopy": true,
00:03:26.119  "get_zone_info": false,
00:03:26.119  "zone_management": false,
00:03:26.119  "zone_append": false,
00:03:26.119  "compare": false,
00:03:26.119  "compare_and_write": false,
00:03:26.119  "abort": true,
00:03:26.119  "seek_hole": false,
00:03:26.119  "seek_data": false,
00:03:26.119  "copy": true,
00:03:26.119  "nvme_iov_md": false
00:03:26.119  },
00:03:26.119  "memory_domains": [
00:03:26.119  {
00:03:26.119  "dma_device_id": "system",
00:03:26.119  "dma_device_type": 1
00:03:26.119  },
00:03:26.119  {
00:03:26.119  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:26.119  "dma_device_type": 2
00:03:26.119  }
00:03:26.119  ],
00:03:26.119  "driver_specific": {}
00:03:26.119  }
00:03:26.119  ]'
00:03:26.119    22:30:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:03:26.119   22:30:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:03:26.119   22:30:26 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:03:26.119   22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:26.119   22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:26.119  [2024-12-10 22:30:26.713073] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:03:26.119  [2024-12-10 22:30:26.713132] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:03:26.119  [2024-12-10 22:30:26.713178] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600001c580
00:03:26.119  [2024-12-10 22:30:26.713198] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:03:26.119  [2024-12-10 22:30:26.716195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:03:26.119  [2024-12-10 22:30:26.716233] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:03:26.119  Passthru0
00:03:26.119   22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:26.119    22:30:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:03:26.119    22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:26.119    22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:26.119    22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:26.119   22:30:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:03:26.119  {
00:03:26.119  "name": "Malloc0",
00:03:26.119  "aliases": [
00:03:26.119  "0d1d602e-9c5c-439e-9568-00d804330543"
00:03:26.119  ],
00:03:26.119  "product_name": "Malloc disk",
00:03:26.119  "block_size": 512,
00:03:26.119  "num_blocks": 16384,
00:03:26.119  "uuid": "0d1d602e-9c5c-439e-9568-00d804330543",
00:03:26.119  "assigned_rate_limits": {
00:03:26.119  "rw_ios_per_sec": 0,
00:03:26.119  "rw_mbytes_per_sec": 0,
00:03:26.119  "r_mbytes_per_sec": 0,
00:03:26.119  "w_mbytes_per_sec": 0
00:03:26.119  },
00:03:26.119  "claimed": true,
00:03:26.119  "claim_type": "exclusive_write",
00:03:26.119  "zoned": false,
00:03:26.119  "supported_io_types": {
00:03:26.119  "read": true,
00:03:26.119  "write": true,
00:03:26.119  "unmap": true,
00:03:26.119  "flush": true,
00:03:26.119  "reset": true,
00:03:26.119  "nvme_admin": false,
00:03:26.119  "nvme_io": false,
00:03:26.119  "nvme_io_md": false,
00:03:26.119  "write_zeroes": true,
00:03:26.119  "zcopy": true,
00:03:26.119  "get_zone_info": false,
00:03:26.119  "zone_management": false,
00:03:26.119  "zone_append": false,
00:03:26.119  "compare": false,
00:03:26.119  "compare_and_write": false,
00:03:26.119  "abort": true,
00:03:26.119  "seek_hole": false,
00:03:26.119  "seek_data": false,
00:03:26.119  "copy": true,
00:03:26.119  "nvme_iov_md": false
00:03:26.119  },
00:03:26.119  "memory_domains": [
00:03:26.119  {
00:03:26.119  "dma_device_id": "system",
00:03:26.119  "dma_device_type": 1
00:03:26.119  },
00:03:26.119  {
00:03:26.119  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:26.119  "dma_device_type": 2
00:03:26.119  }
00:03:26.119  ],
00:03:26.119  "driver_specific": {}
00:03:26.119  },
00:03:26.119  {
00:03:26.119  "name": "Passthru0",
00:03:26.119  "aliases": [
00:03:26.119  "0681a811-c3cb-5a6f-aee4-e8930097424e"
00:03:26.119  ],
00:03:26.119  "product_name": "passthru",
00:03:26.119  "block_size": 512,
00:03:26.119  "num_blocks": 16384,
00:03:26.119  "uuid": "0681a811-c3cb-5a6f-aee4-e8930097424e",
00:03:26.119  "assigned_rate_limits": {
00:03:26.119  "rw_ios_per_sec": 0,
00:03:26.119  "rw_mbytes_per_sec": 0,
00:03:26.119  "r_mbytes_per_sec": 0,
00:03:26.119  "w_mbytes_per_sec": 0
00:03:26.119  },
00:03:26.119  "claimed": false,
00:03:26.119  "zoned": false,
00:03:26.119  "supported_io_types": {
00:03:26.119  "read": true,
00:03:26.119  "write": true,
00:03:26.119  "unmap": true,
00:03:26.119  "flush": true,
00:03:26.119  "reset": true,
00:03:26.119  "nvme_admin": false,
00:03:26.119  "nvme_io": false,
00:03:26.119  "nvme_io_md": false,
00:03:26.119  "write_zeroes": true,
00:03:26.119  "zcopy": true,
00:03:26.119  "get_zone_info": false,
00:03:26.119  "zone_management": false,
00:03:26.119  "zone_append": false,
00:03:26.119  "compare": false,
00:03:26.119  "compare_and_write": false,
00:03:26.119  "abort": true,
00:03:26.119  "seek_hole": false,
00:03:26.119  "seek_data": false,
00:03:26.119  "copy": true,
00:03:26.119  "nvme_iov_md": false
00:03:26.119  },
00:03:26.119  "memory_domains": [
00:03:26.119  {
00:03:26.119  "dma_device_id": "system",
00:03:26.119  "dma_device_type": 1
00:03:26.119  },
00:03:26.119  {
00:03:26.119  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:26.119  "dma_device_type": 2
00:03:26.119  }
00:03:26.119  ],
00:03:26.119  "driver_specific": {
00:03:26.119  "passthru": {
00:03:26.119  "name": "Passthru0",
00:03:26.119  "base_bdev_name": "Malloc0"
00:03:26.119  }
00:03:26.119  }
00:03:26.119  }
00:03:26.119  ]'
00:03:26.119    22:30:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:03:26.119   22:30:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:03:26.119   22:30:26 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:03:26.119   22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:26.119   22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:26.119   22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:26.119   22:30:26 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:03:26.119   22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:26.119   22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:26.119   22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:26.119    22:30:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:03:26.119    22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:26.119    22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:26.119    22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:26.119   22:30:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:03:26.119    22:30:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:03:26.119   22:30:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:03:26.119  
00:03:26.119  real	0m0.261s
00:03:26.119  user	0m0.148s
00:03:26.119  sys	0m0.024s
00:03:26.119   22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:26.119   22:30:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:26.119  ************************************
00:03:26.119  END TEST rpc_integrity
00:03:26.119  ************************************
00:03:26.119   22:30:26 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:03:26.119   22:30:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:26.119   22:30:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:26.119   22:30:26 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:26.119  ************************************
00:03:26.119  START TEST rpc_plugins
00:03:26.119  ************************************
00:03:26.119   22:30:26 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:03:26.119    22:30:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:03:26.119    22:30:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:26.119    22:30:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:03:26.378    22:30:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:26.378   22:30:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:03:26.378    22:30:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:03:26.378    22:30:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:26.378    22:30:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:03:26.378    22:30:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:26.378   22:30:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:03:26.378  {
00:03:26.378  "name": "Malloc1",
00:03:26.378  "aliases": [
00:03:26.378  "fdbd0c92-8b87-452e-bbc8-e31788976c7a"
00:03:26.378  ],
00:03:26.378  "product_name": "Malloc disk",
00:03:26.378  "block_size": 4096,
00:03:26.378  "num_blocks": 256,
00:03:26.378  "uuid": "fdbd0c92-8b87-452e-bbc8-e31788976c7a",
00:03:26.378  "assigned_rate_limits": {
00:03:26.378  "rw_ios_per_sec": 0,
00:03:26.378  "rw_mbytes_per_sec": 0,
00:03:26.379  "r_mbytes_per_sec": 0,
00:03:26.379  "w_mbytes_per_sec": 0
00:03:26.379  },
00:03:26.379  "claimed": false,
00:03:26.379  "zoned": false,
00:03:26.379  "supported_io_types": {
00:03:26.379  "read": true,
00:03:26.379  "write": true,
00:03:26.379  "unmap": true,
00:03:26.379  "flush": true,
00:03:26.379  "reset": true,
00:03:26.379  "nvme_admin": false,
00:03:26.379  "nvme_io": false,
00:03:26.379  "nvme_io_md": false,
00:03:26.379  "write_zeroes": true,
00:03:26.379  "zcopy": true,
00:03:26.379  "get_zone_info": false,
00:03:26.379  "zone_management": false,
00:03:26.379  "zone_append": false,
00:03:26.379  "compare": false,
00:03:26.379  "compare_and_write": false,
00:03:26.379  "abort": true,
00:03:26.379  "seek_hole": false,
00:03:26.379  "seek_data": false,
00:03:26.379  "copy": true,
00:03:26.379  "nvme_iov_md": false
00:03:26.379  },
00:03:26.379  "memory_domains": [
00:03:26.379  {
00:03:26.379  "dma_device_id": "system",
00:03:26.379  "dma_device_type": 1
00:03:26.379  },
00:03:26.379  {
00:03:26.379  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:26.379  "dma_device_type": 2
00:03:26.379  }
00:03:26.379  ],
00:03:26.379  "driver_specific": {}
00:03:26.379  }
00:03:26.379  ]'
00:03:26.379    22:30:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:03:26.379   22:30:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:03:26.379   22:30:26 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:03:26.379   22:30:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:26.379   22:30:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:03:26.379   22:30:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:26.379    22:30:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:03:26.379    22:30:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:26.379    22:30:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:03:26.379    22:30:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:26.379   22:30:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:03:26.379    22:30:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:03:26.379   22:30:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:03:26.379  
00:03:26.379  real	0m0.131s
00:03:26.379  user	0m0.081s
00:03:26.379  sys	0m0.012s
00:03:26.379   22:30:27 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:26.379   22:30:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:03:26.379  ************************************
00:03:26.379  END TEST rpc_plugins
00:03:26.379  ************************************
00:03:26.379   22:30:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:03:26.379   22:30:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:26.379   22:30:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:26.379   22:30:27 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:26.379  ************************************
00:03:26.379  START TEST rpc_trace_cmd_test
00:03:26.379  ************************************
00:03:26.379   22:30:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:03:26.379   22:30:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:03:26.379    22:30:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:03:26.379    22:30:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:26.379    22:30:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:03:26.379    22:30:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:26.379   22:30:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:03:26.379  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid24830",
00:03:26.379  "tpoint_group_mask": "0x8",
00:03:26.379  "iscsi_conn": {
00:03:26.379  "mask": "0x2",
00:03:26.379  "tpoint_mask": "0x0"
00:03:26.379  },
00:03:26.379  "scsi": {
00:03:26.379  "mask": "0x4",
00:03:26.379  "tpoint_mask": "0x0"
00:03:26.379  },
00:03:26.379  "bdev": {
00:03:26.379  "mask": "0x8",
00:03:26.379  "tpoint_mask": "0xffffffffffffffff"
00:03:26.379  },
00:03:26.379  "nvmf_rdma": {
00:03:26.379  "mask": "0x10",
00:03:26.379  "tpoint_mask": "0x0"
00:03:26.379  },
00:03:26.379  "nvmf_tcp": {
00:03:26.379  "mask": "0x20",
00:03:26.379  "tpoint_mask": "0x0"
00:03:26.379  },
00:03:26.379  "ftl": {
00:03:26.379  "mask": "0x40",
00:03:26.379  "tpoint_mask": "0x0"
00:03:26.379  },
00:03:26.379  "blobfs": {
00:03:26.379  "mask": "0x80",
00:03:26.379  "tpoint_mask": "0x0"
00:03:26.379  },
00:03:26.379  "dsa": {
00:03:26.379  "mask": "0x200",
00:03:26.379  "tpoint_mask": "0x0"
00:03:26.379  },
00:03:26.379  "thread": {
00:03:26.379  "mask": "0x400",
00:03:26.379  "tpoint_mask": "0x0"
00:03:26.379  },
00:03:26.379  "nvme_pcie": {
00:03:26.379  "mask": "0x800",
00:03:26.379  "tpoint_mask": "0x0"
00:03:26.379  },
00:03:26.379  "iaa": {
00:03:26.379  "mask": "0x1000",
00:03:26.379  "tpoint_mask": "0x0"
00:03:26.379  },
00:03:26.379  "nvme_tcp": {
00:03:26.379  "mask": "0x2000",
00:03:26.379  "tpoint_mask": "0x0"
00:03:26.379  },
00:03:26.379  "bdev_nvme": {
00:03:26.379  "mask": "0x4000",
00:03:26.379  "tpoint_mask": "0x0"
00:03:26.379  },
00:03:26.379  "sock": {
00:03:26.379  "mask": "0x8000",
00:03:26.379  "tpoint_mask": "0x0"
00:03:26.379  },
00:03:26.379  "blob": {
00:03:26.379  "mask": "0x10000",
00:03:26.379  "tpoint_mask": "0x0"
00:03:26.379  },
00:03:26.379  "bdev_raid": {
00:03:26.379  "mask": "0x20000",
00:03:26.379  "tpoint_mask": "0x0"
00:03:26.379  },
00:03:26.379  "scheduler": {
00:03:26.379  "mask": "0x40000",
00:03:26.379  "tpoint_mask": "0x0"
00:03:26.379  }
00:03:26.379  }'
00:03:26.379    22:30:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:03:26.379   22:30:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:03:26.379    22:30:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:03:26.638   22:30:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:03:26.638    22:30:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:03:26.638   22:30:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:03:26.638    22:30:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:03:26.638   22:30:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:03:26.638    22:30:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:03:26.638   22:30:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:03:26.638  
00:03:26.638  real	0m0.197s
00:03:26.638  user	0m0.175s
00:03:26.638  sys	0m0.013s
00:03:26.638   22:30:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:26.638   22:30:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:03:26.638  ************************************
00:03:26.638  END TEST rpc_trace_cmd_test
00:03:26.638  ************************************
00:03:26.638   22:30:27 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:03:26.638   22:30:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:03:26.638   22:30:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:03:26.638   22:30:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:26.638   22:30:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:26.638   22:30:27 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:26.638  ************************************
00:03:26.638  START TEST rpc_daemon_integrity
00:03:26.638  ************************************
00:03:26.638   22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:03:26.638    22:30:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:03:26.638    22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:26.638    22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:26.638    22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:26.638   22:30:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:03:26.638    22:30:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:03:26.638   22:30:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:03:26.638    22:30:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:03:26.638    22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:26.638    22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:26.638    22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:26.638   22:30:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:03:26.638    22:30:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:03:26.638    22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:26.638    22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:26.638    22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:26.638   22:30:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:03:26.638  {
00:03:26.638  "name": "Malloc2",
00:03:26.638  "aliases": [
00:03:26.638  "a6869532-9f31-4b29-a85b-93fcf1e2839a"
00:03:26.638  ],
00:03:26.638  "product_name": "Malloc disk",
00:03:26.638  "block_size": 512,
00:03:26.638  "num_blocks": 16384,
00:03:26.638  "uuid": "a6869532-9f31-4b29-a85b-93fcf1e2839a",
00:03:26.638  "assigned_rate_limits": {
00:03:26.638  "rw_ios_per_sec": 0,
00:03:26.638  "rw_mbytes_per_sec": 0,
00:03:26.638  "r_mbytes_per_sec": 0,
00:03:26.638  "w_mbytes_per_sec": 0
00:03:26.638  },
00:03:26.638  "claimed": false,
00:03:26.638  "zoned": false,
00:03:26.638  "supported_io_types": {
00:03:26.638  "read": true,
00:03:26.638  "write": true,
00:03:26.638  "unmap": true,
00:03:26.638  "flush": true,
00:03:26.638  "reset": true,
00:03:26.638  "nvme_admin": false,
00:03:26.638  "nvme_io": false,
00:03:26.638  "nvme_io_md": false,
00:03:26.638  "write_zeroes": true,
00:03:26.638  "zcopy": true,
00:03:26.638  "get_zone_info": false,
00:03:26.638  "zone_management": false,
00:03:26.638  "zone_append": false,
00:03:26.638  "compare": false,
00:03:26.638  "compare_and_write": false,
00:03:26.638  "abort": true,
00:03:26.638  "seek_hole": false,
00:03:26.638  "seek_data": false,
00:03:26.638  "copy": true,
00:03:26.638  "nvme_iov_md": false
00:03:26.638  },
00:03:26.638  "memory_domains": [
00:03:26.638  {
00:03:26.638  "dma_device_id": "system",
00:03:26.638  "dma_device_type": 1
00:03:26.638  },
00:03:26.638  {
00:03:26.638  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:26.638  "dma_device_type": 2
00:03:26.638  }
00:03:26.638  ],
00:03:26.638  "driver_specific": {}
00:03:26.638  }
00:03:26.638  ]'
00:03:26.638    22:30:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:03:26.897   22:30:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:03:26.897   22:30:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:03:26.897   22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:26.897   22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:26.897  [2024-12-10 22:30:27.451453] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:03:26.897  [2024-12-10 22:30:27.451506] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:03:26.897  [2024-12-10 22:30:27.451541] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600001d780
00:03:26.897  [2024-12-10 22:30:27.451560] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:03:26.897  [2024-12-10 22:30:27.454538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:03:26.897  [2024-12-10 22:30:27.454574] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:03:26.897  Passthru0
00:03:26.897   22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:26.897    22:30:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:03:26.897    22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:26.897    22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:26.897    22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:26.897   22:30:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:03:26.897  {
00:03:26.897  "name": "Malloc2",
00:03:26.897  "aliases": [
00:03:26.897  "a6869532-9f31-4b29-a85b-93fcf1e2839a"
00:03:26.897  ],
00:03:26.897  "product_name": "Malloc disk",
00:03:26.897  "block_size": 512,
00:03:26.897  "num_blocks": 16384,
00:03:26.897  "uuid": "a6869532-9f31-4b29-a85b-93fcf1e2839a",
00:03:26.897  "assigned_rate_limits": {
00:03:26.897  "rw_ios_per_sec": 0,
00:03:26.897  "rw_mbytes_per_sec": 0,
00:03:26.897  "r_mbytes_per_sec": 0,
00:03:26.897  "w_mbytes_per_sec": 0
00:03:26.897  },
00:03:26.897  "claimed": true,
00:03:26.897  "claim_type": "exclusive_write",
00:03:26.897  "zoned": false,
00:03:26.897  "supported_io_types": {
00:03:26.897  "read": true,
00:03:26.897  "write": true,
00:03:26.897  "unmap": true,
00:03:26.897  "flush": true,
00:03:26.897  "reset": true,
00:03:26.897  "nvme_admin": false,
00:03:26.897  "nvme_io": false,
00:03:26.897  "nvme_io_md": false,
00:03:26.897  "write_zeroes": true,
00:03:26.897  "zcopy": true,
00:03:26.897  "get_zone_info": false,
00:03:26.897  "zone_management": false,
00:03:26.897  "zone_append": false,
00:03:26.897  "compare": false,
00:03:26.897  "compare_and_write": false,
00:03:26.897  "abort": true,
00:03:26.897  "seek_hole": false,
00:03:26.897  "seek_data": false,
00:03:26.897  "copy": true,
00:03:26.897  "nvme_iov_md": false
00:03:26.897  },
00:03:26.897  "memory_domains": [
00:03:26.897  {
00:03:26.897  "dma_device_id": "system",
00:03:26.897  "dma_device_type": 1
00:03:26.897  },
00:03:26.897  {
00:03:26.897  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:26.897  "dma_device_type": 2
00:03:26.897  }
00:03:26.897  ],
00:03:26.897  "driver_specific": {}
00:03:26.897  },
00:03:26.897  {
00:03:26.897  "name": "Passthru0",
00:03:26.897  "aliases": [
00:03:26.897  "1cad4a48-119a-5271-8f0c-8252dc18b6dd"
00:03:26.897  ],
00:03:26.897  "product_name": "passthru",
00:03:26.897  "block_size": 512,
00:03:26.897  "num_blocks": 16384,
00:03:26.897  "uuid": "1cad4a48-119a-5271-8f0c-8252dc18b6dd",
00:03:26.897  "assigned_rate_limits": {
00:03:26.897  "rw_ios_per_sec": 0,
00:03:26.897  "rw_mbytes_per_sec": 0,
00:03:26.897  "r_mbytes_per_sec": 0,
00:03:26.897  "w_mbytes_per_sec": 0
00:03:26.897  },
00:03:26.897  "claimed": false,
00:03:26.897  "zoned": false,
00:03:26.897  "supported_io_types": {
00:03:26.897  "read": true,
00:03:26.897  "write": true,
00:03:26.897  "unmap": true,
00:03:26.897  "flush": true,
00:03:26.897  "reset": true,
00:03:26.897  "nvme_admin": false,
00:03:26.897  "nvme_io": false,
00:03:26.897  "nvme_io_md": false,
00:03:26.897  "write_zeroes": true,
00:03:26.897  "zcopy": true,
00:03:26.897  "get_zone_info": false,
00:03:26.897  "zone_management": false,
00:03:26.897  "zone_append": false,
00:03:26.897  "compare": false,
00:03:26.897  "compare_and_write": false,
00:03:26.897  "abort": true,
00:03:26.897  "seek_hole": false,
00:03:26.897  "seek_data": false,
00:03:26.897  "copy": true,
00:03:26.897  "nvme_iov_md": false
00:03:26.897  },
00:03:26.897  "memory_domains": [
00:03:26.897  {
00:03:26.897  "dma_device_id": "system",
00:03:26.897  "dma_device_type": 1
00:03:26.897  },
00:03:26.897  {
00:03:26.897  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:26.897  "dma_device_type": 2
00:03:26.897  }
00:03:26.897  ],
00:03:26.897  "driver_specific": {
00:03:26.897  "passthru": {
00:03:26.897  "name": "Passthru0",
00:03:26.897  "base_bdev_name": "Malloc2"
00:03:26.897  }
00:03:26.897  }
00:03:26.897  }
00:03:26.897  ]'
00:03:26.897    22:30:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:03:26.897   22:30:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:03:26.897   22:30:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:03:26.897   22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:26.897   22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:26.897   22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:26.897   22:30:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:03:26.897   22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:26.897   22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:26.897   22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:26.897    22:30:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:03:26.897    22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:26.897    22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:26.897    22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:26.897   22:30:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:03:26.897    22:30:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:03:26.897   22:30:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:03:26.897  
00:03:26.897  real	0m0.281s
00:03:26.897  user	0m0.174s
00:03:26.897  sys	0m0.020s
00:03:26.897   22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:26.897   22:30:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:26.897  ************************************
00:03:26.897  END TEST rpc_daemon_integrity
00:03:26.897  ************************************
00:03:26.897   22:30:27 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:03:26.897   22:30:27 rpc -- rpc/rpc.sh@84 -- # killprocess 24830
00:03:26.897   22:30:27 rpc -- common/autotest_common.sh@954 -- # '[' -z 24830 ']'
00:03:26.897   22:30:27 rpc -- common/autotest_common.sh@958 -- # kill -0 24830
00:03:26.897    22:30:27 rpc -- common/autotest_common.sh@959 -- # uname
00:03:26.897   22:30:27 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:03:26.897    22:30:27 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 24830
00:03:26.897   22:30:27 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:03:26.897   22:30:27 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:03:26.897   22:30:27 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 24830'
00:03:26.897  killing process with pid 24830
00:03:26.897   22:30:27 rpc -- common/autotest_common.sh@973 -- # kill 24830
00:03:26.898   22:30:27 rpc -- common/autotest_common.sh@978 -- # wait 24830
00:03:30.185  
00:03:30.185  real	0m5.231s
00:03:30.185  user	0m5.676s
00:03:30.185  sys	0m0.808s
00:03:30.185   22:30:30 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:30.185   22:30:30 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:30.185  ************************************
00:03:30.185  END TEST rpc
00:03:30.185  ************************************
00:03:30.185   22:30:30  -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:03:30.185   22:30:30  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:30.185   22:30:30  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:30.185   22:30:30  -- common/autotest_common.sh@10 -- # set +x
00:03:30.185  ************************************
00:03:30.185  START TEST skip_rpc
00:03:30.185  ************************************
00:03:30.185   22:30:30 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:03:30.185  * Looking for test storage...
00:03:30.185  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc
00:03:30.185    22:30:30 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:03:30.186     22:30:30 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:03:30.186     22:30:30 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:03:30.186    22:30:30 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:03:30.186    22:30:30 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:30.186    22:30:30 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:30.186    22:30:30 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:30.186    22:30:30 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:03:30.186    22:30:30 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:03:30.186    22:30:30 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:03:30.186    22:30:30 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:03:30.186    22:30:30 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:03:30.186    22:30:30 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:03:30.186    22:30:30 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:03:30.186    22:30:30 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:30.186    22:30:30 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:03:30.186    22:30:30 skip_rpc -- scripts/common.sh@345 -- # : 1
00:03:30.186    22:30:30 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:30.186    22:30:30 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:30.186     22:30:30 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:03:30.186     22:30:30 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:03:30.186     22:30:30 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:30.186     22:30:30 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:03:30.186    22:30:30 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:03:30.186     22:30:30 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:03:30.186     22:30:30 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:03:30.186     22:30:30 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:30.186     22:30:30 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:03:30.186    22:30:30 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:03:30.186    22:30:30 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:30.186    22:30:30 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:30.186    22:30:30 skip_rpc -- scripts/common.sh@368 -- # return 0
00:03:30.186    22:30:30 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:30.186    22:30:30 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:03:30.186  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:30.186  		--rc genhtml_branch_coverage=1
00:03:30.186  		--rc genhtml_function_coverage=1
00:03:30.186  		--rc genhtml_legend=1
00:03:30.186  		--rc geninfo_all_blocks=1
00:03:30.186  		--rc geninfo_unexecuted_blocks=1
00:03:30.186  		
00:03:30.186  		'
00:03:30.186    22:30:30 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:03:30.186  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:30.186  		--rc genhtml_branch_coverage=1
00:03:30.186  		--rc genhtml_function_coverage=1
00:03:30.186  		--rc genhtml_legend=1
00:03:30.186  		--rc geninfo_all_blocks=1
00:03:30.186  		--rc geninfo_unexecuted_blocks=1
00:03:30.186  		
00:03:30.186  		'
00:03:30.186    22:30:30 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:03:30.186  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:30.186  		--rc genhtml_branch_coverage=1
00:03:30.186  		--rc genhtml_function_coverage=1
00:03:30.186  		--rc genhtml_legend=1
00:03:30.186  		--rc geninfo_all_blocks=1
00:03:30.186  		--rc geninfo_unexecuted_blocks=1
00:03:30.186  		
00:03:30.186  		'
00:03:30.186    22:30:30 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:03:30.186  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:30.186  		--rc genhtml_branch_coverage=1
00:03:30.186  		--rc genhtml_function_coverage=1
00:03:30.186  		--rc genhtml_legend=1
00:03:30.186  		--rc geninfo_all_blocks=1
00:03:30.186  		--rc geninfo_unexecuted_blocks=1
00:03:30.186  		
00:03:30.186  		'
00:03:30.186   22:30:30 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/config.json
00:03:30.186   22:30:30 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/log.txt
00:03:30.186   22:30:30 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:03:30.186   22:30:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:30.186   22:30:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:30.186   22:30:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:03:30.186  ************************************
00:03:30.186  START TEST skip_rpc
00:03:30.186  ************************************
00:03:30.186   22:30:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:03:30.186   22:30:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=26035
00:03:30.186   22:30:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:30.186   22:30:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:03:30.186   22:30:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:03:30.186  [2024-12-10 22:30:30.600151] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:03:30.186  [2024-12-10 22:30:30.600258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid26035 ]
00:03:30.186  [2024-12-10 22:30:30.728210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:30.186  [2024-12-10 22:30:30.870890] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:03:35.453   22:30:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:03:35.453   22:30:35 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:03:35.453   22:30:35 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:03:35.453   22:30:35 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:03:35.453   22:30:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:03:35.453    22:30:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:03:35.453   22:30:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:03:35.453   22:30:35 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:03:35.453   22:30:35 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:35.453   22:30:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:03:35.453   22:30:35 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:03:35.453   22:30:35 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:03:35.453   22:30:35 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:03:35.453   22:30:35 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:03:35.453   22:30:35 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:03:35.453   22:30:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:03:35.453   22:30:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 26035
00:03:35.453   22:30:35 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 26035 ']'
00:03:35.453   22:30:35 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 26035
00:03:35.453    22:30:35 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:03:35.453   22:30:35 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:03:35.453    22:30:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 26035
00:03:35.453   22:30:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:03:35.453   22:30:35 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:03:35.453   22:30:35 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 26035'
00:03:35.453  killing process with pid 26035
00:03:35.453   22:30:35 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 26035
00:03:35.453   22:30:35 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 26035
00:03:37.989  
00:03:37.989  real	0m7.738s
00:03:37.989  user	0m7.217s
00:03:37.989  sys	0m0.522s
00:03:37.989   22:30:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:37.989   22:30:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:03:37.989  ************************************
00:03:37.989  END TEST skip_rpc
00:03:37.989  ************************************
00:03:37.989   22:30:38 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:03:37.989   22:30:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:37.989   22:30:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:37.989   22:30:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:03:37.989  ************************************
00:03:37.989  START TEST skip_rpc_with_json
00:03:37.989  ************************************
00:03:37.989   22:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:03:37.989   22:30:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:03:37.989   22:30:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=27386
00:03:37.989   22:30:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:37.989   22:30:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 27386
00:03:37.989   22:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 27386 ']'
00:03:37.989   22:30:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:03:37.989   22:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:37.989   22:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:37.989   22:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:37.989  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:37.989   22:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:03:37.989   22:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:03:37.989  [2024-12-10 22:30:38.375982] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:03:37.989  [2024-12-10 22:30:38.376119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid27386 ]
00:03:37.989  [2024-12-10 22:30:38.506462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:37.989  [2024-12-10 22:30:38.643808] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:03:38.925   22:30:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:03:38.925   22:30:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:03:38.925   22:30:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:03:38.925   22:30:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:38.925   22:30:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:03:38.925  [2024-12-10 22:30:39.657386] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:03:38.925  request:
00:03:38.925  {
00:03:38.926  "trtype": "tcp",
00:03:38.926  "method": "nvmf_get_transports",
00:03:38.926  "req_id": 1
00:03:38.926  }
00:03:38.926  Got JSON-RPC error response
00:03:38.926  response:
00:03:38.926  {
00:03:38.926  "code": -19,
00:03:38.926  "message": "No such device"
00:03:38.926  }
00:03:38.926   22:30:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:03:38.926   22:30:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:03:38.926   22:30:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:38.926   22:30:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:03:38.926  [2024-12-10 22:30:39.665543] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:03:38.926   22:30:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:38.926   22:30:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:03:38.926   22:30:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:38.926   22:30:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:03:39.184   22:30:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:39.184   22:30:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/config.json
00:03:39.184  {
00:03:39.184  "subsystems": [
00:03:39.184  {
00:03:39.184  "subsystem": "fsdev",
00:03:39.184  "config": [
00:03:39.184  {
00:03:39.184  "method": "fsdev_set_opts",
00:03:39.184  "params": {
00:03:39.184  "fsdev_io_pool_size": 65535,
00:03:39.184  "fsdev_io_cache_size": 256
00:03:39.184  }
00:03:39.184  }
00:03:39.184  ]
00:03:39.184  },
00:03:39.184  {
00:03:39.184  "subsystem": "vfio_user_target",
00:03:39.184  "config": null
00:03:39.184  },
00:03:39.184  {
00:03:39.184  "subsystem": "keyring",
00:03:39.184  "config": []
00:03:39.184  },
00:03:39.184  {
00:03:39.184  "subsystem": "iobuf",
00:03:39.184  "config": [
00:03:39.184  {
00:03:39.184  "method": "iobuf_set_options",
00:03:39.184  "params": {
00:03:39.184  "small_pool_count": 8192,
00:03:39.184  "large_pool_count": 1024,
00:03:39.184  "small_bufsize": 8192,
00:03:39.184  "large_bufsize": 135168,
00:03:39.184  "enable_numa": false
00:03:39.184  }
00:03:39.184  }
00:03:39.184  ]
00:03:39.184  },
00:03:39.184  {
00:03:39.184  "subsystem": "sock",
00:03:39.184  "config": [
00:03:39.184  {
00:03:39.184  "method": "sock_set_default_impl",
00:03:39.184  "params": {
00:03:39.184  "impl_name": "posix"
00:03:39.184  }
00:03:39.184  },
00:03:39.184  {
00:03:39.184  "method": "sock_impl_set_options",
00:03:39.184  "params": {
00:03:39.184  "impl_name": "ssl",
00:03:39.184  "recv_buf_size": 4096,
00:03:39.184  "send_buf_size": 4096,
00:03:39.184  "enable_recv_pipe": true,
00:03:39.184  "enable_quickack": false,
00:03:39.184  "enable_placement_id": 0,
00:03:39.184  "enable_zerocopy_send_server": true,
00:03:39.184  "enable_zerocopy_send_client": false,
00:03:39.184  "zerocopy_threshold": 0,
00:03:39.184  "tls_version": 0,
00:03:39.184  "enable_ktls": false
00:03:39.184  }
00:03:39.184  },
00:03:39.184  {
00:03:39.184  "method": "sock_impl_set_options",
00:03:39.184  "params": {
00:03:39.184  "impl_name": "posix",
00:03:39.184  "recv_buf_size": 2097152,
00:03:39.184  "send_buf_size": 2097152,
00:03:39.184  "enable_recv_pipe": true,
00:03:39.184  "enable_quickack": false,
00:03:39.184  "enable_placement_id": 0,
00:03:39.184  "enable_zerocopy_send_server": true,
00:03:39.184  "enable_zerocopy_send_client": false,
00:03:39.184  "zerocopy_threshold": 0,
00:03:39.184  "tls_version": 0,
00:03:39.184  "enable_ktls": false
00:03:39.184  }
00:03:39.184  }
00:03:39.184  ]
00:03:39.184  },
00:03:39.184  {
00:03:39.184  "subsystem": "vmd",
00:03:39.184  "config": []
00:03:39.184  },
00:03:39.184  {
00:03:39.184  "subsystem": "accel",
00:03:39.184  "config": [
00:03:39.184  {
00:03:39.184  "method": "accel_set_options",
00:03:39.184  "params": {
00:03:39.184  "small_cache_size": 128,
00:03:39.184  "large_cache_size": 16,
00:03:39.184  "task_count": 2048,
00:03:39.184  "sequence_count": 2048,
00:03:39.184  "buf_count": 2048
00:03:39.184  }
00:03:39.184  }
00:03:39.184  ]
00:03:39.184  },
00:03:39.184  {
00:03:39.184  "subsystem": "bdev",
00:03:39.184  "config": [
00:03:39.184  {
00:03:39.184  "method": "bdev_set_options",
00:03:39.184  "params": {
00:03:39.184  "bdev_io_pool_size": 65535,
00:03:39.184  "bdev_io_cache_size": 256,
00:03:39.184  "bdev_auto_examine": true,
00:03:39.184  "iobuf_small_cache_size": 128,
00:03:39.184  "iobuf_large_cache_size": 16
00:03:39.184  }
00:03:39.184  },
00:03:39.184  {
00:03:39.184  "method": "bdev_raid_set_options",
00:03:39.184  "params": {
00:03:39.184  "process_window_size_kb": 1024,
00:03:39.184  "process_max_bandwidth_mb_sec": 0
00:03:39.184  }
00:03:39.184  },
00:03:39.184  {
00:03:39.184  "method": "bdev_iscsi_set_options",
00:03:39.184  "params": {
00:03:39.184  "timeout_sec": 30
00:03:39.184  }
00:03:39.184  },
00:03:39.184  {
00:03:39.184  "method": "bdev_nvme_set_options",
00:03:39.184  "params": {
00:03:39.184  "action_on_timeout": "none",
00:03:39.184  "timeout_us": 0,
00:03:39.184  "timeout_admin_us": 0,
00:03:39.184  "keep_alive_timeout_ms": 10000,
00:03:39.184  "arbitration_burst": 0,
00:03:39.184  "low_priority_weight": 0,
00:03:39.184  "medium_priority_weight": 0,
00:03:39.184  "high_priority_weight": 0,
00:03:39.184  "nvme_adminq_poll_period_us": 10000,
00:03:39.185  "nvme_ioq_poll_period_us": 0,
00:03:39.185  "io_queue_requests": 0,
00:03:39.185  "delay_cmd_submit": true,
00:03:39.185  "transport_retry_count": 4,
00:03:39.185  "bdev_retry_count": 3,
00:03:39.185  "transport_ack_timeout": 0,
00:03:39.185  "ctrlr_loss_timeout_sec": 0,
00:03:39.185  "reconnect_delay_sec": 0,
00:03:39.185  "fast_io_fail_timeout_sec": 0,
00:03:39.185  "disable_auto_failback": false,
00:03:39.185  "generate_uuids": false,
00:03:39.185  "transport_tos": 0,
00:03:39.185  "nvme_error_stat": false,
00:03:39.185  "rdma_srq_size": 0,
00:03:39.185  "io_path_stat": false,
00:03:39.185  "allow_accel_sequence": false,
00:03:39.185  "rdma_max_cq_size": 0,
00:03:39.185  "rdma_cm_event_timeout_ms": 0,
00:03:39.185  "dhchap_digests": [
00:03:39.185  "sha256",
00:03:39.185  "sha384",
00:03:39.185  "sha512"
00:03:39.185  ],
00:03:39.185  "dhchap_dhgroups": [
00:03:39.185  "null",
00:03:39.185  "ffdhe2048",
00:03:39.185  "ffdhe3072",
00:03:39.185  "ffdhe4096",
00:03:39.185  "ffdhe6144",
00:03:39.185  "ffdhe8192"
00:03:39.185  ],
00:03:39.185  "rdma_umr_per_io": false
00:03:39.185  }
00:03:39.185  },
00:03:39.185  {
00:03:39.185  "method": "bdev_nvme_set_hotplug",
00:03:39.185  "params": {
00:03:39.185  "period_us": 100000,
00:03:39.185  "enable": false
00:03:39.185  }
00:03:39.185  },
00:03:39.185  {
00:03:39.185  "method": "bdev_wait_for_examine"
00:03:39.185  }
00:03:39.185  ]
00:03:39.185  },
00:03:39.185  {
00:03:39.185  "subsystem": "scsi",
00:03:39.185  "config": null
00:03:39.185  },
00:03:39.185  {
00:03:39.185  "subsystem": "scheduler",
00:03:39.185  "config": [
00:03:39.185  {
00:03:39.185  "method": "framework_set_scheduler",
00:03:39.185  "params": {
00:03:39.185  "name": "static"
00:03:39.185  }
00:03:39.185  }
00:03:39.185  ]
00:03:39.185  },
00:03:39.185  {
00:03:39.185  "subsystem": "vhost_scsi",
00:03:39.185  "config": []
00:03:39.185  },
00:03:39.185  {
00:03:39.185  "subsystem": "vhost_blk",
00:03:39.185  "config": []
00:03:39.185  },
00:03:39.185  {
00:03:39.185  "subsystem": "ublk",
00:03:39.185  "config": []
00:03:39.185  },
00:03:39.185  {
00:03:39.185  "subsystem": "nbd",
00:03:39.185  "config": []
00:03:39.185  },
00:03:39.185  {
00:03:39.185  "subsystem": "nvmf",
00:03:39.185  "config": [
00:03:39.185  {
00:03:39.185  "method": "nvmf_set_config",
00:03:39.185  "params": {
00:03:39.185  "discovery_filter": "match_any",
00:03:39.185  "admin_cmd_passthru": {
00:03:39.185  "identify_ctrlr": false
00:03:39.185  },
00:03:39.185  "dhchap_digests": [
00:03:39.185  "sha256",
00:03:39.185  "sha384",
00:03:39.185  "sha512"
00:03:39.185  ],
00:03:39.185  "dhchap_dhgroups": [
00:03:39.185  "null",
00:03:39.185  "ffdhe2048",
00:03:39.185  "ffdhe3072",
00:03:39.185  "ffdhe4096",
00:03:39.185  "ffdhe6144",
00:03:39.185  "ffdhe8192"
00:03:39.185  ]
00:03:39.185  }
00:03:39.185  },
00:03:39.185  {
00:03:39.185  "method": "nvmf_set_max_subsystems",
00:03:39.185  "params": {
00:03:39.185  "max_subsystems": 1024
00:03:39.185  }
00:03:39.185  },
00:03:39.185  {
00:03:39.185  "method": "nvmf_set_crdt",
00:03:39.185  "params": {
00:03:39.185  "crdt1": 0,
00:03:39.185  "crdt2": 0,
00:03:39.185  "crdt3": 0
00:03:39.185  }
00:03:39.185  },
00:03:39.185  {
00:03:39.185  "method": "nvmf_create_transport",
00:03:39.185  "params": {
00:03:39.185  "trtype": "TCP",
00:03:39.185  "max_queue_depth": 128,
00:03:39.185  "max_io_qpairs_per_ctrlr": 127,
00:03:39.185  "in_capsule_data_size": 4096,
00:03:39.185  "max_io_size": 131072,
00:03:39.185  "io_unit_size": 131072,
00:03:39.185  "max_aq_depth": 128,
00:03:39.185  "num_shared_buffers": 511,
00:03:39.185  "buf_cache_size": 4294967295,
00:03:39.185  "dif_insert_or_strip": false,
00:03:39.185  "zcopy": false,
00:03:39.185  "c2h_success": true,
00:03:39.185  "sock_priority": 0,
00:03:39.185  "abort_timeout_sec": 1,
00:03:39.185  "ack_timeout": 0,
00:03:39.185  "data_wr_pool_size": 0
00:03:39.185  }
00:03:39.185  }
00:03:39.185  ]
00:03:39.185  },
00:03:39.185  {
00:03:39.185  "subsystem": "iscsi",
00:03:39.185  "config": [
00:03:39.185  {
00:03:39.185  "method": "iscsi_set_options",
00:03:39.185  "params": {
00:03:39.185  "node_base": "iqn.2016-06.io.spdk",
00:03:39.185  "max_sessions": 128,
00:03:39.185  "max_connections_per_session": 2,
00:03:39.185  "max_queue_depth": 64,
00:03:39.185  "default_time2wait": 2,
00:03:39.185  "default_time2retain": 20,
00:03:39.185  "first_burst_length": 8192,
00:03:39.185  "immediate_data": true,
00:03:39.185  "allow_duplicated_isid": false,
00:03:39.185  "error_recovery_level": 0,
00:03:39.185  "nop_timeout": 60,
00:03:39.185  "nop_in_interval": 30,
00:03:39.185  "disable_chap": false,
00:03:39.185  "require_chap": false,
00:03:39.185  "mutual_chap": false,
00:03:39.185  "chap_group": 0,
00:03:39.185  "max_large_datain_per_connection": 64,
00:03:39.185  "max_r2t_per_connection": 4,
00:03:39.185  "pdu_pool_size": 36864,
00:03:39.185  "immediate_data_pool_size": 16384,
00:03:39.185  "data_out_pool_size": 2048
00:03:39.185  }
00:03:39.185  }
00:03:39.185  ]
00:03:39.185  }
00:03:39.185  ]
00:03:39.185  }
00:03:39.185   22:30:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:03:39.185   22:30:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 27386
00:03:39.185   22:30:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 27386 ']'
00:03:39.185   22:30:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 27386
00:03:39.185    22:30:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:03:39.185   22:30:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:03:39.185    22:30:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 27386
00:03:39.185   22:30:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:03:39.185   22:30:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:03:39.185   22:30:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 27386'
00:03:39.185  killing process with pid 27386
00:03:39.185   22:30:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 27386
00:03:39.185   22:30:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 27386
00:03:42.469   22:30:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=28128
00:03:42.469   22:30:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/config.json
00:03:42.469   22:30:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:03:47.736   22:30:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 28128
00:03:47.736   22:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 28128 ']'
00:03:47.736   22:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 28128
00:03:47.736    22:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:03:47.736   22:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:03:47.736    22:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 28128
00:03:47.736   22:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:03:47.736   22:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:03:47.736   22:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 28128'
00:03:47.736  killing process with pid 28128
00:03:47.736   22:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 28128
00:03:47.736   22:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 28128
00:03:49.638   22:30:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/log.txt
00:03:49.638   22:30:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/log.txt
00:03:49.638  
00:03:49.638  real	0m11.995s
00:03:49.638  user	0m11.347s
00:03:49.639  sys	0m1.098s
00:03:49.639   22:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:49.639   22:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:03:49.639  ************************************
00:03:49.639  END TEST skip_rpc_with_json
00:03:49.639  ************************************
00:03:49.639   22:30:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:03:49.639   22:30:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:49.639   22:30:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:49.639   22:30:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:03:49.639  ************************************
00:03:49.639  START TEST skip_rpc_with_delay
00:03:49.639  ************************************
00:03:49.639   22:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:03:49.639   22:30:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:03:49.639   22:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:03:49.639   22:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:03:49.639   22:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:03:49.639   22:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:03:49.639    22:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:03:49.639   22:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:03:49.639    22:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:03:49.639   22:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:03:49.639   22:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:03:49.639   22:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:03:49.639   22:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:03:49.639  [2024-12-10 22:30:50.414272] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:03:49.897   22:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:03:49.897   22:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:03:49.897   22:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:03:49.897   22:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:03:49.897  
00:03:49.897  real	0m0.144s
00:03:49.897  user	0m0.072s
00:03:49.897  sys	0m0.071s
00:03:49.897   22:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:49.898   22:30:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:03:49.898  ************************************
00:03:49.898  END TEST skip_rpc_with_delay
00:03:49.898  ************************************
00:03:49.898    22:30:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:03:49.898   22:30:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:03:49.898   22:30:50 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:03:49.898   22:30:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:49.898   22:30:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:49.898   22:30:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:03:49.898  ************************************
00:03:49.898  START TEST exit_on_failed_rpc_init
00:03:49.898  ************************************
00:03:49.898   22:30:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:03:49.898   22:30:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=29616
00:03:49.898   22:30:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:03:49.898   22:30:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 29616
00:03:49.898   22:30:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 29616 ']'
00:03:49.898   22:30:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:49.898   22:30:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:49.898   22:30:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:49.898  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:49.898   22:30:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:03:49.898   22:30:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:03:49.898  [2024-12-10 22:30:50.607705] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:03:49.898  [2024-12-10 22:30:50.607839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid29616 ]
00:03:50.156  [2024-12-10 22:30:50.743951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:50.156  [2024-12-10 22:30:50.882225] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:03:51.533   22:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:03:51.533   22:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:03:51.533   22:30:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:51.533   22:30:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:03:51.533   22:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:03:51.533   22:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:03:51.533   22:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:03:51.533   22:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:03:51.533    22:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:03:51.533   22:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:03:51.533    22:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:03:51.533   22:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:03:51.533   22:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:03:51.533   22:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:03:51.533   22:30:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:03:51.533  [2024-12-10 22:30:51.997133] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:03:51.533  [2024-12-10 22:30:51.997258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid29841 ]
00:03:51.533  [2024-12-10 22:30:52.128283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:51.533  [2024-12-10 22:30:52.268439] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:03:51.533  [2024-12-10 22:30:52.268561] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:03:51.533  [2024-12-10 22:30:52.268592] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:03:51.533  [2024-12-10 22:30:52.268614] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:03:51.793   22:30:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:03:51.793   22:30:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:03:51.793   22:30:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:03:51.793   22:30:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:03:51.793   22:30:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:03:51.793   22:30:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:03:51.793   22:30:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:03:51.793   22:30:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 29616
00:03:51.793   22:30:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 29616 ']'
00:03:51.793   22:30:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 29616
00:03:51.793    22:30:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:03:51.793   22:30:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:03:52.052    22:30:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 29616
00:03:52.052   22:30:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:03:52.052   22:30:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:03:52.052   22:30:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 29616'
00:03:52.052  killing process with pid 29616
00:03:52.052   22:30:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 29616
00:03:52.052   22:30:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 29616
00:03:54.588  
00:03:54.588  real	0m4.764s
00:03:54.588  user	0m5.119s
00:03:54.588  sys	0m0.748s
00:03:54.588   22:30:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:54.588   22:30:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:03:54.588  ************************************
00:03:54.588  END TEST exit_on_failed_rpc_init
00:03:54.588  ************************************
00:03:54.588   22:30:55 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/config.json
00:03:54.588  
00:03:54.588  real	0m24.942s
00:03:54.588  user	0m23.910s
00:03:54.588  sys	0m2.602s
00:03:54.588   22:30:55 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:54.588   22:30:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:03:54.588  ************************************
00:03:54.588  END TEST skip_rpc
00:03:54.588  ************************************
00:03:54.588   22:30:55  -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:03:54.588   22:30:55  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:54.588   22:30:55  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:54.588   22:30:55  -- common/autotest_common.sh@10 -- # set +x
00:03:54.588  ************************************
00:03:54.588  START TEST rpc_client
00:03:54.588  ************************************
00:03:54.588   22:30:55 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:03:54.847  * Looking for test storage...
00:03:54.847  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_client
00:03:54.847    22:30:55 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:03:54.847     22:30:55 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version
00:03:54.847     22:30:55 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:03:54.847    22:30:55 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:03:54.847    22:30:55 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:54.847    22:30:55 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:54.847    22:30:55 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:54.847    22:30:55 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:03:54.847    22:30:55 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:03:54.847    22:30:55 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:03:54.847    22:30:55 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:03:54.847    22:30:55 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:03:54.847    22:30:55 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:03:54.847    22:30:55 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:03:54.847    22:30:55 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:54.847    22:30:55 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:03:54.847    22:30:55 rpc_client -- scripts/common.sh@345 -- # : 1
00:03:54.847    22:30:55 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:54.847    22:30:55 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:54.847     22:30:55 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:03:54.847     22:30:55 rpc_client -- scripts/common.sh@353 -- # local d=1
00:03:54.847     22:30:55 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:54.847     22:30:55 rpc_client -- scripts/common.sh@355 -- # echo 1
00:03:54.847    22:30:55 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:03:54.847     22:30:55 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:03:54.847     22:30:55 rpc_client -- scripts/common.sh@353 -- # local d=2
00:03:54.847     22:30:55 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:54.847     22:30:55 rpc_client -- scripts/common.sh@355 -- # echo 2
00:03:54.847    22:30:55 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:03:54.847    22:30:55 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:54.847    22:30:55 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:54.847    22:30:55 rpc_client -- scripts/common.sh@368 -- # return 0
00:03:54.847    22:30:55 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:54.847    22:30:55 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:03:54.847  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:54.847  		--rc genhtml_branch_coverage=1
00:03:54.847  		--rc genhtml_function_coverage=1
00:03:54.847  		--rc genhtml_legend=1
00:03:54.847  		--rc geninfo_all_blocks=1
00:03:54.847  		--rc geninfo_unexecuted_blocks=1
00:03:54.847  		
00:03:54.847  		'
00:03:54.847    22:30:55 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:03:54.847  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:54.847  		--rc genhtml_branch_coverage=1
00:03:54.847  		--rc genhtml_function_coverage=1
00:03:54.847  		--rc genhtml_legend=1
00:03:54.847  		--rc geninfo_all_blocks=1
00:03:54.847  		--rc geninfo_unexecuted_blocks=1
00:03:54.847  		
00:03:54.847  		'
00:03:54.847    22:30:55 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:03:54.847  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:54.847  		--rc genhtml_branch_coverage=1
00:03:54.847  		--rc genhtml_function_coverage=1
00:03:54.847  		--rc genhtml_legend=1
00:03:54.847  		--rc geninfo_all_blocks=1
00:03:54.847  		--rc geninfo_unexecuted_blocks=1
00:03:54.847  		
00:03:54.847  		'
00:03:54.847    22:30:55 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:03:54.847  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:54.847  		--rc genhtml_branch_coverage=1
00:03:54.847  		--rc genhtml_function_coverage=1
00:03:54.847  		--rc genhtml_legend=1
00:03:54.847  		--rc geninfo_all_blocks=1
00:03:54.847  		--rc geninfo_unexecuted_blocks=1
00:03:54.847  		
00:03:54.847  		'
00:03:54.847   22:30:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:03:54.847  OK
00:03:54.847   22:30:55 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:03:54.847  
00:03:54.847  real	0m0.155s
00:03:54.848  user	0m0.091s
00:03:54.848  sys	0m0.071s
00:03:54.848   22:30:55 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:54.848   22:30:55 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:03:54.848  ************************************
00:03:54.848  END TEST rpc_client
00:03:54.848  ************************************
00:03:54.848   22:30:55  -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/json_config.sh
00:03:54.848   22:30:55  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:54.848   22:30:55  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:54.848   22:30:55  -- common/autotest_common.sh@10 -- # set +x
00:03:54.848  ************************************
00:03:54.848  START TEST json_config
00:03:54.848  ************************************
00:03:54.848   22:30:55 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/json_config.sh
00:03:54.848    22:30:55 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:03:54.848     22:30:55 json_config -- common/autotest_common.sh@1711 -- # lcov --version
00:03:54.848     22:30:55 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:03:54.848    22:30:55 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:03:54.848    22:30:55 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:54.848    22:30:55 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:54.848    22:30:55 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:54.848    22:30:55 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:03:54.848    22:30:55 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:03:54.848    22:30:55 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:03:54.848    22:30:55 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:03:54.848    22:30:55 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:03:54.848    22:30:55 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:03:54.848    22:30:55 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:03:54.848    22:30:55 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:54.848    22:30:55 json_config -- scripts/common.sh@344 -- # case "$op" in
00:03:54.848    22:30:55 json_config -- scripts/common.sh@345 -- # : 1
00:03:54.848    22:30:55 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:54.848    22:30:55 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:54.848     22:30:55 json_config -- scripts/common.sh@365 -- # decimal 1
00:03:54.848     22:30:55 json_config -- scripts/common.sh@353 -- # local d=1
00:03:54.848     22:30:55 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:54.848     22:30:55 json_config -- scripts/common.sh@355 -- # echo 1
00:03:54.848    22:30:55 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:03:54.848     22:30:55 json_config -- scripts/common.sh@366 -- # decimal 2
00:03:54.848     22:30:55 json_config -- scripts/common.sh@353 -- # local d=2
00:03:54.848     22:30:55 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:54.848     22:30:55 json_config -- scripts/common.sh@355 -- # echo 2
00:03:54.848    22:30:55 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:03:54.848    22:30:55 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:54.848    22:30:55 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:54.848    22:30:55 json_config -- scripts/common.sh@368 -- # return 0
00:03:54.848    22:30:55 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:54.848    22:30:55 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:03:54.848  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:54.848  		--rc genhtml_branch_coverage=1
00:03:54.848  		--rc genhtml_function_coverage=1
00:03:54.848  		--rc genhtml_legend=1
00:03:54.848  		--rc geninfo_all_blocks=1
00:03:54.848  		--rc geninfo_unexecuted_blocks=1
00:03:54.848  		
00:03:54.848  		'
00:03:54.848    22:30:55 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:03:54.848  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:54.848  		--rc genhtml_branch_coverage=1
00:03:54.848  		--rc genhtml_function_coverage=1
00:03:54.848  		--rc genhtml_legend=1
00:03:54.848  		--rc geninfo_all_blocks=1
00:03:54.848  		--rc geninfo_unexecuted_blocks=1
00:03:54.848  		
00:03:54.848  		'
00:03:54.848    22:30:55 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:03:54.848  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:54.848  		--rc genhtml_branch_coverage=1
00:03:54.848  		--rc genhtml_function_coverage=1
00:03:54.848  		--rc genhtml_legend=1
00:03:54.848  		--rc geninfo_all_blocks=1
00:03:54.848  		--rc geninfo_unexecuted_blocks=1
00:03:54.848  		
00:03:54.848  		'
00:03:54.848    22:30:55 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:03:54.848  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:54.848  		--rc genhtml_branch_coverage=1
00:03:54.848  		--rc genhtml_function_coverage=1
00:03:54.848  		--rc genhtml_legend=1
00:03:54.848  		--rc geninfo_all_blocks=1
00:03:54.848  		--rc geninfo_unexecuted_blocks=1
00:03:54.848  		
00:03:54.848  		'
00:03:54.848   22:30:55 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh
00:03:54.848     22:30:55 json_config -- nvmf/common.sh@7 -- # uname -s
00:03:55.108    22:30:55 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:03:55.108    22:30:55 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:03:55.108    22:30:55 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:03:55.108    22:30:55 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:03:55.108    22:30:55 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:03:55.108    22:30:55 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:03:55.108    22:30:55 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:03:55.108    22:30:55 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:03:55.108    22:30:55 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:03:55.108     22:30:55 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:03:55.108    22:30:55 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:808ec059-55a7-e511-906e-0012795d96dd
00:03:55.108    22:30:55 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=808ec059-55a7-e511-906e-0012795d96dd
00:03:55.108    22:30:55 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:03:55.108    22:30:55 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:03:55.108    22:30:55 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:03:55.108    22:30:55 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:03:55.108    22:30:55 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/common.sh
00:03:55.108     22:30:55 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:03:55.108     22:30:55 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:03:55.108     22:30:55 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:55.108     22:30:55 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:55.108      22:30:55 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:55.108      22:30:55 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:55.108      22:30:55 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:55.108      22:30:55 json_config -- paths/export.sh@5 -- # export PATH
00:03:55.109      22:30:55 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:55.109    22:30:55 json_config -- nvmf/common.sh@51 -- # : 0
00:03:55.109    22:30:55 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:03:55.109    22:30:55 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:03:55.109    22:30:55 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:03:55.109    22:30:55 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:03:55.109    22:30:55 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:03:55.109    22:30:55 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:03:55.109  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:03:55.109    22:30:55 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:03:55.109    22:30:55 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:03:55.109    22:30:55 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
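The `[: : integer expression expected` message above is bash's complaint when `[ ... -eq ... ]` receives an empty operand — here the traced command is `'[' '' -eq 1 ']'` at nvmf/common.sh line 33, i.e. the tested variable was unset. A minimal reproduction and a guarded form (the variable name `flag` is illustrative, not from the script):

```shell
flag=""

# Reproduces the error: an empty string is not a valid integer operand,
# so the test fails with "integer expression expected" (suppressed here).
[ "$flag" -eq 1 ] 2>/dev/null && echo "enabled" || echo "disabled or unset"

# Guarded form: default the value to 0 before the numeric comparison.
[ "${flag:-0}" -eq 1 ] && echo "enabled" || echo "disabled"
```

The guard makes the comparison well-defined whether or not the flag was exported by the environment.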
00:03:55.109   22:30:55 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/common.sh
00:03:55.109   22:30:55 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:03:55.109   22:30:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:03:55.109   22:30:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:03:55.109   22:30:55 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 	SPDK_TEST_ISCSI + 	SPDK_TEST_NVMF + 	SPDK_TEST_VHOST + 	SPDK_TEST_VHOST_INIT + 	SPDK_TEST_RBD == 0 ))
00:03:55.109   22:30:55 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests'
00:03:55.109  WARNING: No tests are enabled so not running JSON configuration tests
00:03:55.109   22:30:55 json_config -- json_config/json_config.sh@28 -- # exit 0
00:03:55.109  
00:03:55.109  real	0m0.127s
00:03:55.109  user	0m0.089s
00:03:55.109  sys	0m0.042s
00:03:55.109   22:30:55 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:55.109   22:30:55 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:55.109  ************************************
00:03:55.109  END TEST json_config
00:03:55.109  ************************************
00:03:55.109   22:30:55  -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:03:55.109   22:30:55  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:55.109   22:30:55  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:55.109   22:30:55  -- common/autotest_common.sh@10 -- # set +x
00:03:55.109  ************************************
00:03:55.109  START TEST json_config_extra_key
00:03:55.109  ************************************
00:03:55.109   22:30:55 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:03:55.109    22:30:55 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:03:55.109     22:30:55 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:03:55.109     22:30:55 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version
00:03:55.109    22:30:55 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:03:55.109    22:30:55 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:55.109    22:30:55 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:55.109    22:30:55 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:55.109    22:30:55 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:03:55.109    22:30:55 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:03:55.109    22:30:55 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:03:55.109    22:30:55 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:03:55.109    22:30:55 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:03:55.109    22:30:55 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:03:55.109    22:30:55 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:03:55.109    22:30:55 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:55.109    22:30:55 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:03:55.109    22:30:55 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:03:55.109    22:30:55 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:55.109    22:30:55 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:55.109     22:30:55 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:03:55.109     22:30:55 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:03:55.109     22:30:55 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:55.109     22:30:55 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:03:55.109    22:30:55 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:03:55.109     22:30:55 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:03:55.109     22:30:55 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:03:55.109     22:30:55 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:55.109     22:30:55 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:03:55.109    22:30:55 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:03:55.109    22:30:55 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:55.109    22:30:55 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:55.109    22:30:55 json_config_extra_key -- scripts/common.sh@368 -- # return 0
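The trace above walks through scripts/common.sh's `lt 1.15 2` check: both versions are split into fields, then compared numerically field by field until one side wins. A simplified sketch of that logic (the real script also splits on `-` and `:`; this version splits on `.` only, and the function name `lt` mirrors the traced helper):

```shell
# Returns 0 (true) if version $1 is strictly older than $2.
lt() {
    local -a a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    # Compare up to the longer field count; missing fields default to 0.
    local n=${#a[@]} i
    (( ${#b[@]} > n )) && n=${#b[@]}
    for ((i = 0; i < n; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1  # versions are equal
}

lt 1.15 2 && echo "older" || echo "not older"
```

With `1.15` vs `2`, the first field already decides it (1 < 2), matching the `return 0` seen in the trace.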
00:03:55.109    22:30:55 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:55.109    22:30:55 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:03:55.109  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:55.109  		--rc genhtml_branch_coverage=1
00:03:55.109  		--rc genhtml_function_coverage=1
00:03:55.109  		--rc genhtml_legend=1
00:03:55.109  		--rc geninfo_all_blocks=1
00:03:55.109  		--rc geninfo_unexecuted_blocks=1
00:03:55.109  		
00:03:55.109  		'
00:03:55.109    22:30:55 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:03:55.109  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:55.109  		--rc genhtml_branch_coverage=1
00:03:55.109  		--rc genhtml_function_coverage=1
00:03:55.109  		--rc genhtml_legend=1
00:03:55.109  		--rc geninfo_all_blocks=1
00:03:55.109  		--rc geninfo_unexecuted_blocks=1
00:03:55.109  		
00:03:55.109  		'
00:03:55.109    22:30:55 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:03:55.109  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:55.109  		--rc genhtml_branch_coverage=1
00:03:55.109  		--rc genhtml_function_coverage=1
00:03:55.109  		--rc genhtml_legend=1
00:03:55.109  		--rc geninfo_all_blocks=1
00:03:55.109  		--rc geninfo_unexecuted_blocks=1
00:03:55.109  		
00:03:55.109  		'
00:03:55.109    22:30:55 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:03:55.109  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:55.109  		--rc genhtml_branch_coverage=1
00:03:55.109  		--rc genhtml_function_coverage=1
00:03:55.109  		--rc genhtml_legend=1
00:03:55.109  		--rc geninfo_all_blocks=1
00:03:55.109  		--rc geninfo_unexecuted_blocks=1
00:03:55.109  		
00:03:55.109  		'
00:03:55.109   22:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh
00:03:55.109     22:30:55 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:03:55.109    22:30:55 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:03:55.109    22:30:55 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:03:55.109    22:30:55 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:03:55.109    22:30:55 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:03:55.109    22:30:55 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:03:55.109    22:30:55 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:03:55.109    22:30:55 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:03:55.109    22:30:55 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:03:55.109    22:30:55 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:03:55.109     22:30:55 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:03:55.109    22:30:55 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:808ec059-55a7-e511-906e-0012795d96dd
00:03:55.109    22:30:55 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=808ec059-55a7-e511-906e-0012795d96dd
00:03:55.109    22:30:55 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:03:55.109    22:30:55 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:03:55.109    22:30:55 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:03:55.109    22:30:55 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:03:55.109    22:30:55 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/common.sh
00:03:55.109     22:30:55 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:03:55.109     22:30:55 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:03:55.109     22:30:55 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:55.109     22:30:55 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:55.109      22:30:55 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:55.109      22:30:55 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:55.109      22:30:55 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:55.109      22:30:55 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:03:55.109      22:30:55 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:55.109    22:30:55 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:03:55.109    22:30:55 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:03:55.109    22:30:55 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:03:55.109    22:30:55 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:03:55.110    22:30:55 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:03:55.110    22:30:55 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:03:55.110    22:30:55 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:03:55.110  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:03:55.110    22:30:55 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:03:55.110    22:30:55 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:03:55.110    22:30:55 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:03:55.110   22:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/common.sh
00:03:55.110   22:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:03:55.110   22:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:03:55.110   22:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:03:55.110   22:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:03:55.110   22:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:03:55.110   22:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:03:55.110   22:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/extra_key.json')
00:03:55.110   22:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:03:55.110   22:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:03:55.110   22:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:03:55.110  INFO: launching applications...
00:03:55.110   22:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/extra_key.json
00:03:55.110   22:30:55 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:03:55.110   22:30:55 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:03:55.110   22:30:55 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:03:55.110   22:30:55 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:03:55.110   22:30:55 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:03:55.110   22:30:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:03:55.110   22:30:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:03:55.110   22:30:55 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=30625
00:03:55.110   22:30:55 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/extra_key.json
00:03:55.110   22:30:55 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:03:55.110  Waiting for target to run...
00:03:55.110   22:30:55 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 30625 /var/tmp/spdk_tgt.sock
00:03:55.110   22:30:55 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 30625 ']'
00:03:55.110   22:30:55 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:03:55.110   22:30:55 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:55.110   22:30:55 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:03:55.110  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:03:55.110   22:30:55 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:03:55.110   22:30:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:03:55.369  [2024-12-10 22:30:55.901373] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:03:55.369  [2024-12-10 22:30:55.901486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid30625 ]
00:03:55.628  [2024-12-10 22:30:56.294571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:55.887  [2024-12-10 22:30:56.424703] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:03:56.455   22:30:57 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:03:56.455   22:30:57 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:03:56.455   22:30:57 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:03:56.455  
00:03:56.455   22:30:57 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:03:56.455  INFO: shutting down applications...
00:03:56.714   22:30:57 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:03:56.714   22:30:57 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:03:56.714   22:30:57 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:03:56.714   22:30:57 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 30625 ]]
00:03:56.714   22:30:57 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 30625
00:03:56.714   22:30:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:03:56.714   22:30:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:03:56.714   22:30:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 30625
00:03:56.714   22:30:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:03:56.973   22:30:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:03:56.973   22:30:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:03:56.973   22:30:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 30625
00:03:56.973   22:30:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:03:57.541   22:30:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:03:57.541   22:30:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:03:57.541   22:30:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 30625
00:03:57.541   22:30:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:03:58.109   22:30:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:03:58.109   22:30:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:03:58.109   22:30:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 30625
00:03:58.109   22:30:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:03:58.678   22:30:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:03:58.678   22:30:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:03:58.678   22:30:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 30625
00:03:58.678   22:30:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:03:59.245   22:30:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:03:59.245   22:30:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:03:59.245   22:30:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 30625
00:03:59.245   22:30:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:03:59.504   22:31:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:03:59.504   22:31:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:03:59.505   22:31:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 30625
00:03:59.505   22:31:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:04:00.072   22:31:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:04:00.072   22:31:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:00.072   22:31:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 30625
00:04:00.072   22:31:00 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:04:00.072   22:31:00 json_config_extra_key -- json_config/common.sh@43 -- # break
00:04:00.072   22:31:00 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:04:00.072   22:31:00 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:04:00.072  SPDK target shutdown done
00:04:00.072   22:31:00 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:04:00.072  Success
00:04:00.072  
00:04:00.073  real	0m5.064s
00:04:00.073  user	0m4.528s
00:04:00.073  sys	0m0.620s
00:04:00.073   22:31:00 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:00.073   22:31:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:04:00.073  ************************************
00:04:00.073  END TEST json_config_extra_key
00:04:00.073  ************************************
00:04:00.073   22:31:00  -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:04:00.073   22:31:00  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:00.073   22:31:00  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:00.073   22:31:00  -- common/autotest_common.sh@10 -- # set +x
00:04:00.073  ************************************
00:04:00.073  START TEST alias_rpc
00:04:00.073  ************************************
00:04:00.073   22:31:00 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:04:00.332  * Looking for test storage...
00:04:00.332  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/alias_rpc
00:04:00.332    22:31:00 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:00.332     22:31:00 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:00.332     22:31:00 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:04:00.332    22:31:00 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:00.332    22:31:00 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:00.332    22:31:00 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:00.332    22:31:00 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:00.332    22:31:00 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:00.332    22:31:00 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:00.332    22:31:00 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:00.332    22:31:00 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:00.332    22:31:00 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:00.332    22:31:00 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:00.332    22:31:00 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:00.332    22:31:00 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:00.332    22:31:00 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:00.332    22:31:00 alias_rpc -- scripts/common.sh@345 -- # : 1
00:04:00.332    22:31:00 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:00.332    22:31:00 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:00.332     22:31:00 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:04:00.332     22:31:00 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:04:00.332     22:31:00 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:00.332     22:31:00 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:04:00.332    22:31:00 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:00.332     22:31:00 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:04:00.332     22:31:00 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:04:00.332     22:31:00 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:00.332     22:31:00 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:04:00.332    22:31:00 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:00.332    22:31:00 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:00.332    22:31:00 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:00.332    22:31:00 alias_rpc -- scripts/common.sh@368 -- # return 0
00:04:00.332    22:31:00 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:00.332    22:31:00 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:00.332  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:00.332  		--rc genhtml_branch_coverage=1
00:04:00.332  		--rc genhtml_function_coverage=1
00:04:00.332  		--rc genhtml_legend=1
00:04:00.332  		--rc geninfo_all_blocks=1
00:04:00.332  		--rc geninfo_unexecuted_blocks=1
00:04:00.332  		
00:04:00.332  		'
00:04:00.332    22:31:00 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:00.332  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:00.332  		--rc genhtml_branch_coverage=1
00:04:00.332  		--rc genhtml_function_coverage=1
00:04:00.332  		--rc genhtml_legend=1
00:04:00.332  		--rc geninfo_all_blocks=1
00:04:00.332  		--rc geninfo_unexecuted_blocks=1
00:04:00.332  		
00:04:00.332  		'
00:04:00.332    22:31:00 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:00.332  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:00.332  		--rc genhtml_branch_coverage=1
00:04:00.332  		--rc genhtml_function_coverage=1
00:04:00.332  		--rc genhtml_legend=1
00:04:00.332  		--rc geninfo_all_blocks=1
00:04:00.332  		--rc geninfo_unexecuted_blocks=1
00:04:00.332  		
00:04:00.332  		'
00:04:00.332    22:31:00 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:00.332  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:00.332  		--rc genhtml_branch_coverage=1
00:04:00.332  		--rc genhtml_function_coverage=1
00:04:00.332  		--rc genhtml_legend=1
00:04:00.332  		--rc geninfo_all_blocks=1
00:04:00.332  		--rc geninfo_unexecuted_blocks=1
00:04:00.332  		
00:04:00.332  		'
00:04:00.332   22:31:00 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:04:00.332   22:31:00 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=31714
00:04:00.332   22:31:00 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 31714
00:04:00.332   22:31:00 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 31714 ']'
00:04:00.332   22:31:00 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:00.332   22:31:00 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:00.332   22:31:00 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:04:00.332   22:31:00 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:00.332  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:00.332   22:31:00 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:00.332   22:31:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:00.332  [2024-12-10 22:31:01.055727] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:04:00.332  [2024-12-10 22:31:01.055881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid31714 ]
00:04:00.592  [2024-12-10 22:31:01.186919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:00.592  [2024-12-10 22:31:01.326390] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:01.987   22:31:02 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:01.987   22:31:02 alias_rpc -- common/autotest_common.sh@868 -- # return 0
00:04:01.987   22:31:02 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py load_config -i
00:04:01.987   22:31:02 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 31714
00:04:01.987   22:31:02 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 31714 ']'
00:04:01.987   22:31:02 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 31714
00:04:01.987    22:31:02 alias_rpc -- common/autotest_common.sh@959 -- # uname
00:04:01.987   22:31:02 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:01.987    22:31:02 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 31714
00:04:01.987   22:31:02 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:01.987   22:31:02 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:01.987   22:31:02 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 31714'
00:04:01.987  killing process with pid 31714
00:04:01.987   22:31:02 alias_rpc -- common/autotest_common.sh@973 -- # kill 31714
00:04:01.987   22:31:02 alias_rpc -- common/autotest_common.sh@978 -- # wait 31714
00:04:04.518  
00:04:04.518  real	0m4.462s
00:04:04.518  user	0m4.552s
00:04:04.518  sys	0m0.630s
00:04:04.518   22:31:05 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:04.518   22:31:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:04.518  ************************************
00:04:04.518  END TEST alias_rpc
00:04:04.518  ************************************
00:04:04.518   22:31:05  -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:04:04.518   22:31:05  -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli/tcp.sh
00:04:04.518   22:31:05  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:04.518   22:31:05  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:04.776   22:31:05  -- common/autotest_common.sh@10 -- # set +x
00:04:04.776  ************************************
00:04:04.776  START TEST spdkcli_tcp
00:04:04.776  ************************************
00:04:04.776   22:31:05 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli/tcp.sh
00:04:04.776  * Looking for test storage...
00:04:04.776  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli
00:04:04.776    22:31:05 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:04.776     22:31:05 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version
00:04:04.776     22:31:05 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:04.776    22:31:05 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:04.776    22:31:05 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:04.776    22:31:05 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:04.776    22:31:05 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:04.776    22:31:05 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:04:04.776    22:31:05 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:04:04.776    22:31:05 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:04:04.776    22:31:05 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:04:04.776    22:31:05 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:04:04.776    22:31:05 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:04:04.776    22:31:05 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:04:04.776    22:31:05 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:04.776    22:31:05 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in
00:04:04.776    22:31:05 spdkcli_tcp -- scripts/common.sh@345 -- # : 1
00:04:04.776    22:31:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:04.776    22:31:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:04.776     22:31:05 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1
00:04:04.776     22:31:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1
00:04:04.776     22:31:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:04.776     22:31:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1
00:04:04.776    22:31:05 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:04:04.776     22:31:05 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2
00:04:04.776     22:31:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2
00:04:04.776     22:31:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:04.776     22:31:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2
00:04:04.776    22:31:05 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:04:04.776    22:31:05 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:04.776    22:31:05 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:04.776    22:31:05 spdkcli_tcp -- scripts/common.sh@368 -- # return 0
00:04:04.776    22:31:05 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:04.776    22:31:05 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:04.776  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:04.776  		--rc genhtml_branch_coverage=1
00:04:04.776  		--rc genhtml_function_coverage=1
00:04:04.776  		--rc genhtml_legend=1
00:04:04.776  		--rc geninfo_all_blocks=1
00:04:04.776  		--rc geninfo_unexecuted_blocks=1
00:04:04.776  		
00:04:04.776  		'
00:04:04.776    22:31:05 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:04.776  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:04.776  		--rc genhtml_branch_coverage=1
00:04:04.776  		--rc genhtml_function_coverage=1
00:04:04.776  		--rc genhtml_legend=1
00:04:04.776  		--rc geninfo_all_blocks=1
00:04:04.776  		--rc geninfo_unexecuted_blocks=1
00:04:04.776  		
00:04:04.776  		'
00:04:04.776    22:31:05 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:04.776  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:04.776  		--rc genhtml_branch_coverage=1
00:04:04.776  		--rc genhtml_function_coverage=1
00:04:04.776  		--rc genhtml_legend=1
00:04:04.776  		--rc geninfo_all_blocks=1
00:04:04.776  		--rc geninfo_unexecuted_blocks=1
00:04:04.776  		
00:04:04.776  		'
00:04:04.776    22:31:05 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:04.776  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:04.776  		--rc genhtml_branch_coverage=1
00:04:04.776  		--rc genhtml_function_coverage=1
00:04:04.776  		--rc genhtml_legend=1
00:04:04.776  		--rc geninfo_all_blocks=1
00:04:04.776  		--rc geninfo_unexecuted_blocks=1
00:04:04.776  		
00:04:04.776  		'
00:04:04.776   22:31:05 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli/common.sh
00:04:04.776    22:31:05 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:04:04.776    22:31:05 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/clear_config.py
00:04:04.776   22:31:05 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:04:04.776   22:31:05 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:04:04.776   22:31:05 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:04:04.776   22:31:05 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:04:04.777   22:31:05 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:04.777   22:31:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:04.777   22:31:05 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=32425
00:04:04.777   22:31:05 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:04:04.777   22:31:05 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 32425
00:04:04.777   22:31:05 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 32425 ']'
00:04:04.777   22:31:05 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:04.777   22:31:05 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:04.777   22:31:05 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:04.777  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:04.777   22:31:05 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:04.777   22:31:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:04.777  [2024-12-10 22:31:05.539915] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:04:04.777  [2024-12-10 22:31:05.540027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid32425 ]
00:04:05.037  [2024-12-10 22:31:05.669114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:05.037  [2024-12-10 22:31:05.809890] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:05.037  [2024-12-10 22:31:05.809891] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:04:06.421   22:31:06 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:06.421   22:31:06 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0
00:04:06.421   22:31:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=32823
00:04:06.421   22:31:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:04:06.421   22:31:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:04:06.421  [
00:04:06.421    "bdev_malloc_delete",
00:04:06.421    "bdev_malloc_create",
00:04:06.421    "bdev_null_resize",
00:04:06.421    "bdev_null_delete",
00:04:06.421    "bdev_null_create",
00:04:06.421    "bdev_nvme_cuse_unregister",
00:04:06.421    "bdev_nvme_cuse_register",
00:04:06.421    "bdev_opal_new_user",
00:04:06.421    "bdev_opal_set_lock_state",
00:04:06.421    "bdev_opal_delete",
00:04:06.421    "bdev_opal_get_info",
00:04:06.421    "bdev_opal_create",
00:04:06.421    "bdev_nvme_opal_revert",
00:04:06.421    "bdev_nvme_opal_init",
00:04:06.421    "bdev_nvme_send_cmd",
00:04:06.421    "bdev_nvme_set_keys",
00:04:06.421    "bdev_nvme_get_path_iostat",
00:04:06.421    "bdev_nvme_get_mdns_discovery_info",
00:04:06.421    "bdev_nvme_stop_mdns_discovery",
00:04:06.421    "bdev_nvme_start_mdns_discovery",
00:04:06.421    "bdev_nvme_set_multipath_policy",
00:04:06.421    "bdev_nvme_set_preferred_path",
00:04:06.421    "bdev_nvme_get_io_paths",
00:04:06.421    "bdev_nvme_remove_error_injection",
00:04:06.421    "bdev_nvme_add_error_injection",
00:04:06.421    "bdev_nvme_get_discovery_info",
00:04:06.421    "bdev_nvme_stop_discovery",
00:04:06.421    "bdev_nvme_start_discovery",
00:04:06.421    "bdev_nvme_get_controller_health_info",
00:04:06.421    "bdev_nvme_disable_controller",
00:04:06.421    "bdev_nvme_enable_controller",
00:04:06.421    "bdev_nvme_reset_controller",
00:04:06.421    "bdev_nvme_get_transport_statistics",
00:04:06.421    "bdev_nvme_apply_firmware",
00:04:06.421    "bdev_nvme_detach_controller",
00:04:06.421    "bdev_nvme_get_controllers",
00:04:06.421    "bdev_nvme_attach_controller",
00:04:06.421    "bdev_nvme_set_hotplug",
00:04:06.421    "bdev_nvme_set_options",
00:04:06.421    "bdev_passthru_delete",
00:04:06.421    "bdev_passthru_create",
00:04:06.421    "bdev_lvol_set_parent_bdev",
00:04:06.421    "bdev_lvol_set_parent",
00:04:06.421    "bdev_lvol_check_shallow_copy",
00:04:06.421    "bdev_lvol_start_shallow_copy",
00:04:06.421    "bdev_lvol_grow_lvstore",
00:04:06.421    "bdev_lvol_get_lvols",
00:04:06.421    "bdev_lvol_get_lvstores",
00:04:06.421    "bdev_lvol_delete",
00:04:06.421    "bdev_lvol_set_read_only",
00:04:06.421    "bdev_lvol_resize",
00:04:06.421    "bdev_lvol_decouple_parent",
00:04:06.421    "bdev_lvol_inflate",
00:04:06.421    "bdev_lvol_rename",
00:04:06.421    "bdev_lvol_clone_bdev",
00:04:06.421    "bdev_lvol_clone",
00:04:06.421    "bdev_lvol_snapshot",
00:04:06.421    "bdev_lvol_create",
00:04:06.421    "bdev_lvol_delete_lvstore",
00:04:06.421    "bdev_lvol_rename_lvstore",
00:04:06.421    "bdev_lvol_create_lvstore",
00:04:06.421    "bdev_raid_set_options",
00:04:06.421    "bdev_raid_remove_base_bdev",
00:04:06.421    "bdev_raid_add_base_bdev",
00:04:06.421    "bdev_raid_delete",
00:04:06.421    "bdev_raid_create",
00:04:06.421    "bdev_raid_get_bdevs",
00:04:06.421    "bdev_error_inject_error",
00:04:06.421    "bdev_error_delete",
00:04:06.421    "bdev_error_create",
00:04:06.421    "bdev_split_delete",
00:04:06.421    "bdev_split_create",
00:04:06.421    "bdev_delay_delete",
00:04:06.421    "bdev_delay_create",
00:04:06.421    "bdev_delay_update_latency",
00:04:06.421    "bdev_zone_block_delete",
00:04:06.421    "bdev_zone_block_create",
00:04:06.421    "blobfs_create",
00:04:06.421    "blobfs_detect",
00:04:06.421    "blobfs_set_cache_size",
00:04:06.421    "bdev_crypto_delete",
00:04:06.421    "bdev_crypto_create",
00:04:06.422    "bdev_aio_delete",
00:04:06.422    "bdev_aio_rescan",
00:04:06.422    "bdev_aio_create",
00:04:06.422    "bdev_ftl_set_property",
00:04:06.422    "bdev_ftl_get_properties",
00:04:06.422    "bdev_ftl_get_stats",
00:04:06.422    "bdev_ftl_unmap",
00:04:06.422    "bdev_ftl_unload",
00:04:06.422    "bdev_ftl_delete",
00:04:06.422    "bdev_ftl_load",
00:04:06.422    "bdev_ftl_create",
00:04:06.422    "bdev_virtio_attach_controller",
00:04:06.422    "bdev_virtio_scsi_get_devices",
00:04:06.422    "bdev_virtio_detach_controller",
00:04:06.422    "bdev_virtio_blk_set_hotplug",
00:04:06.422    "bdev_iscsi_delete",
00:04:06.422    "bdev_iscsi_create",
00:04:06.422    "bdev_iscsi_set_options",
00:04:06.422    "accel_error_inject_error",
00:04:06.422    "ioat_scan_accel_module",
00:04:06.422    "dsa_scan_accel_module",
00:04:06.422    "iaa_scan_accel_module",
00:04:06.422    "dpdk_cryptodev_get_driver",
00:04:06.422    "dpdk_cryptodev_set_driver",
00:04:06.422    "dpdk_cryptodev_scan_accel_module",
00:04:06.422    "vfu_virtio_create_fs_endpoint",
00:04:06.422    "vfu_virtio_create_scsi_endpoint",
00:04:06.422    "vfu_virtio_scsi_remove_target",
00:04:06.422    "vfu_virtio_scsi_add_target",
00:04:06.422    "vfu_virtio_create_blk_endpoint",
00:04:06.422    "vfu_virtio_delete_endpoint",
00:04:06.422    "keyring_file_remove_key",
00:04:06.422    "keyring_file_add_key",
00:04:06.422    "keyring_linux_set_options",
00:04:06.422    "fsdev_aio_delete",
00:04:06.422    "fsdev_aio_create",
00:04:06.422    "iscsi_get_histogram",
00:04:06.422    "iscsi_enable_histogram",
00:04:06.422    "iscsi_set_options",
00:04:06.422    "iscsi_get_auth_groups",
00:04:06.422    "iscsi_auth_group_remove_secret",
00:04:06.422    "iscsi_auth_group_add_secret",
00:04:06.422    "iscsi_delete_auth_group",
00:04:06.422    "iscsi_create_auth_group",
00:04:06.422    "iscsi_set_discovery_auth",
00:04:06.422    "iscsi_get_options",
00:04:06.422    "iscsi_target_node_request_logout",
00:04:06.422    "iscsi_target_node_set_redirect",
00:04:06.422    "iscsi_target_node_set_auth",
00:04:06.422    "iscsi_target_node_add_lun",
00:04:06.422    "iscsi_get_stats",
00:04:06.422    "iscsi_get_connections",
00:04:06.422    "iscsi_portal_group_set_auth",
00:04:06.422    "iscsi_start_portal_group",
00:04:06.422    "iscsi_delete_portal_group",
00:04:06.422    "iscsi_create_portal_group",
00:04:06.422    "iscsi_get_portal_groups",
00:04:06.422    "iscsi_delete_target_node",
00:04:06.422    "iscsi_target_node_remove_pg_ig_maps",
00:04:06.422    "iscsi_target_node_add_pg_ig_maps",
00:04:06.422    "iscsi_create_target_node",
00:04:06.422    "iscsi_get_target_nodes",
00:04:06.422    "iscsi_delete_initiator_group",
00:04:06.422    "iscsi_initiator_group_remove_initiators",
00:04:06.422    "iscsi_initiator_group_add_initiators",
00:04:06.422    "iscsi_create_initiator_group",
00:04:06.422    "iscsi_get_initiator_groups",
00:04:06.422    "nvmf_set_crdt",
00:04:06.422    "nvmf_set_config",
00:04:06.422    "nvmf_set_max_subsystems",
00:04:06.422    "nvmf_stop_mdns_prr",
00:04:06.422    "nvmf_publish_mdns_prr",
00:04:06.422    "nvmf_subsystem_get_listeners",
00:04:06.422    "nvmf_subsystem_get_qpairs",
00:04:06.422    "nvmf_subsystem_get_controllers",
00:04:06.422    "nvmf_get_stats",
00:04:06.422    "nvmf_get_transports",
00:04:06.422    "nvmf_create_transport",
00:04:06.422    "nvmf_get_targets",
00:04:06.422    "nvmf_delete_target",
00:04:06.422    "nvmf_create_target",
00:04:06.422    "nvmf_subsystem_allow_any_host",
00:04:06.422    "nvmf_subsystem_set_keys",
00:04:06.422    "nvmf_subsystem_remove_host",
00:04:06.422    "nvmf_subsystem_add_host",
00:04:06.422    "nvmf_ns_remove_host",
00:04:06.422    "nvmf_ns_add_host",
00:04:06.422    "nvmf_subsystem_remove_ns",
00:04:06.422    "nvmf_subsystem_set_ns_ana_group",
00:04:06.422    "nvmf_subsystem_add_ns",
00:04:06.422    "nvmf_subsystem_listener_set_ana_state",
00:04:06.422    "nvmf_discovery_get_referrals",
00:04:06.422    "nvmf_discovery_remove_referral",
00:04:06.422    "nvmf_discovery_add_referral",
00:04:06.422    "nvmf_subsystem_remove_listener",
00:04:06.422    "nvmf_subsystem_add_listener",
00:04:06.422    "nvmf_delete_subsystem",
00:04:06.422    "nvmf_create_subsystem",
00:04:06.422    "nvmf_get_subsystems",
00:04:06.422    "env_dpdk_get_mem_stats",
00:04:06.422    "nbd_get_disks",
00:04:06.422    "nbd_stop_disk",
00:04:06.422    "nbd_start_disk",
00:04:06.422    "ublk_recover_disk",
00:04:06.422    "ublk_get_disks",
00:04:06.422    "ublk_stop_disk",
00:04:06.422    "ublk_start_disk",
00:04:06.422    "ublk_destroy_target",
00:04:06.422    "ublk_create_target",
00:04:06.422    "virtio_blk_create_transport",
00:04:06.422    "virtio_blk_get_transports",
00:04:06.422    "vhost_controller_set_coalescing",
00:04:06.422    "vhost_get_controllers",
00:04:06.422    "vhost_delete_controller",
00:04:06.422    "vhost_create_blk_controller",
00:04:06.422    "vhost_scsi_controller_remove_target",
00:04:06.422    "vhost_scsi_controller_add_target",
00:04:06.422    "vhost_start_scsi_controller",
00:04:06.422    "vhost_create_scsi_controller",
00:04:06.422    "thread_set_cpumask",
00:04:06.422    "scheduler_set_options",
00:04:06.422    "framework_get_governor",
00:04:06.422    "framework_get_scheduler",
00:04:06.422    "framework_set_scheduler",
00:04:06.422    "framework_get_reactors",
00:04:06.422    "thread_get_io_channels",
00:04:06.422    "thread_get_pollers",
00:04:06.422    "thread_get_stats",
00:04:06.422    "framework_monitor_context_switch",
00:04:06.422    "spdk_kill_instance",
00:04:06.422    "log_enable_timestamps",
00:04:06.422    "log_get_flags",
00:04:06.422    "log_clear_flag",
00:04:06.422    "log_set_flag",
00:04:06.422    "log_get_level",
00:04:06.422    "log_set_level",
00:04:06.422    "log_get_print_level",
00:04:06.422    "log_set_print_level",
00:04:06.422    "framework_enable_cpumask_locks",
00:04:06.422    "framework_disable_cpumask_locks",
00:04:06.422    "framework_wait_init",
00:04:06.422    "framework_start_init",
00:04:06.422    "scsi_get_devices",
00:04:06.422    "bdev_get_histogram",
00:04:06.422    "bdev_enable_histogram",
00:04:06.422    "bdev_set_qos_limit",
00:04:06.422    "bdev_set_qd_sampling_period",
00:04:06.422    "bdev_get_bdevs",
00:04:06.422    "bdev_reset_iostat",
00:04:06.422    "bdev_get_iostat",
00:04:06.422    "bdev_examine",
00:04:06.422    "bdev_wait_for_examine",
00:04:06.422    "bdev_set_options",
00:04:06.422    "accel_get_stats",
00:04:06.422    "accel_set_options",
00:04:06.422    "accel_set_driver",
00:04:06.422    "accel_crypto_key_destroy",
00:04:06.422    "accel_crypto_keys_get",
00:04:06.422    "accel_crypto_key_create",
00:04:06.422    "accel_assign_opc",
00:04:06.422    "accel_get_module_info",
00:04:06.422    "accel_get_opc_assignments",
00:04:06.422    "vmd_rescan",
00:04:06.422    "vmd_remove_device",
00:04:06.422    "vmd_enable",
00:04:06.422    "sock_get_default_impl",
00:04:06.422    "sock_set_default_impl",
00:04:06.422    "sock_impl_set_options",
00:04:06.422    "sock_impl_get_options",
00:04:06.422    "iobuf_get_stats",
00:04:06.422    "iobuf_set_options",
00:04:06.422    "keyring_get_keys",
00:04:06.422    "vfu_tgt_set_base_path",
00:04:06.422    "framework_get_pci_devices",
00:04:06.422    "framework_get_config",
00:04:06.422    "framework_get_subsystems",
00:04:06.422    "fsdev_set_opts",
00:04:06.422    "fsdev_get_opts",
00:04:06.422    "trace_get_info",
00:04:06.422    "trace_get_tpoint_group_mask",
00:04:06.422    "trace_disable_tpoint_group",
00:04:06.422    "trace_enable_tpoint_group",
00:04:06.422    "trace_clear_tpoint_mask",
00:04:06.422    "trace_set_tpoint_mask",
00:04:06.422    "notify_get_notifications",
00:04:06.422    "notify_get_types",
00:04:06.422    "spdk_get_version",
00:04:06.422    "rpc_get_methods"
00:04:06.422  ]
00:04:06.422   22:31:07 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:04:06.422   22:31:07 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:06.422   22:31:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:06.422   22:31:07 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:04:06.422   22:31:07 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 32425
00:04:06.422   22:31:07 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 32425 ']'
00:04:06.422   22:31:07 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 32425
00:04:06.422    22:31:07 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname
00:04:06.422   22:31:07 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:06.422    22:31:07 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 32425
00:04:06.422   22:31:07 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:06.422   22:31:07 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:06.422   22:31:07 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 32425'
00:04:06.422  killing process with pid 32425
00:04:06.422   22:31:07 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 32425
00:04:06.422   22:31:07 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 32425
00:04:09.712  
00:04:09.712  real	0m4.442s
00:04:09.712  user	0m8.071s
00:04:09.712  sys	0m0.643s
00:04:09.712   22:31:09 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:09.712   22:31:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:09.712  ************************************
00:04:09.712  END TEST spdkcli_tcp
00:04:09.712  ************************************
00:04:09.712   22:31:09  -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:04:09.712   22:31:09  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:09.712   22:31:09  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:09.712   22:31:09  -- common/autotest_common.sh@10 -- # set +x
00:04:09.712  ************************************
00:04:09.712  START TEST dpdk_mem_utility
00:04:09.712  ************************************
00:04:09.712   22:31:09 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:04:09.712  * Looking for test storage...
00:04:09.712  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/dpdk_memory_utility
00:04:09.712    22:31:09 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:09.712     22:31:09 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version
00:04:09.712     22:31:09 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:09.712    22:31:09 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:09.712    22:31:09 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:09.712    22:31:09 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:09.712    22:31:09 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:09.712    22:31:09 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-:
00:04:09.712    22:31:09 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1
00:04:09.712    22:31:09 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-:
00:04:09.712    22:31:09 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2
00:04:09.712    22:31:09 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<'
00:04:09.712    22:31:09 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2
00:04:09.712    22:31:09 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1
00:04:09.712    22:31:09 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:09.712    22:31:09 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in
00:04:09.712    22:31:09 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1
00:04:09.712    22:31:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:09.712    22:31:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:09.712     22:31:09 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1
00:04:09.712     22:31:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1
00:04:09.712     22:31:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:09.712     22:31:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1
00:04:09.712    22:31:09 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1
00:04:09.712     22:31:09 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2
00:04:09.712     22:31:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2
00:04:09.712     22:31:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:09.712     22:31:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2
00:04:09.712    22:31:09 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2
00:04:09.712    22:31:09 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:09.712    22:31:09 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:09.712    22:31:09 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0
00:04:09.712    22:31:09 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:09.712    22:31:09 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:09.712  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:09.712  		--rc genhtml_branch_coverage=1
00:04:09.712  		--rc genhtml_function_coverage=1
00:04:09.712  		--rc genhtml_legend=1
00:04:09.712  		--rc geninfo_all_blocks=1
00:04:09.712  		--rc geninfo_unexecuted_blocks=1
00:04:09.712  		
00:04:09.712  		'
00:04:09.712    22:31:09 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:09.712  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:09.712  		--rc genhtml_branch_coverage=1
00:04:09.712  		--rc genhtml_function_coverage=1
00:04:09.712  		--rc genhtml_legend=1
00:04:09.712  		--rc geninfo_all_blocks=1
00:04:09.712  		--rc geninfo_unexecuted_blocks=1
00:04:09.712  		
00:04:09.712  		'
00:04:09.712    22:31:09 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:09.712  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:09.712  		--rc genhtml_branch_coverage=1
00:04:09.712  		--rc genhtml_function_coverage=1
00:04:09.712  		--rc genhtml_legend=1
00:04:09.712  		--rc geninfo_all_blocks=1
00:04:09.712  		--rc geninfo_unexecuted_blocks=1
00:04:09.712  		
00:04:09.712  		'
00:04:09.712    22:31:09 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:09.712  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:09.712  		--rc genhtml_branch_coverage=1
00:04:09.712  		--rc genhtml_function_coverage=1
00:04:09.712  		--rc genhtml_legend=1
00:04:09.712  		--rc geninfo_all_blocks=1
00:04:09.712  		--rc geninfo_unexecuted_blocks=1
00:04:09.712  		
00:04:09.712  		'
00:04:09.712   22:31:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:04:09.712   22:31:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=33319
00:04:09.712   22:31:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:04:09.712   22:31:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 33319
00:04:09.712   22:31:09 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 33319 ']'
00:04:09.712   22:31:09 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:09.712   22:31:09 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:09.712   22:31:09 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:09.712  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:09.712   22:31:09 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:09.712   22:31:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:09.712  [2024-12-10 22:31:10.043713] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:04:09.712  [2024-12-10 22:31:10.043849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid33319 ]
00:04:09.712  [2024-12-10 22:31:10.191847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:09.712  [2024-12-10 22:31:10.339655] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:10.646   22:31:11 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:10.646   22:31:11 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
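The waitforlisten helper traced above blocks until the freshly started spdk_tgt (pid 33319) is up and listening on /var/tmp/spdk.sock, giving up after max_retries=100 attempts. A minimal stand-in for that polling loop (the sleep interval and the plain socket-exists check are simplifications of my own; the real helper in common/autotest_common.sh also verifies the pid is still alive):

```shell
# Poll for a UNIX-domain socket to appear, in the spirit of waitforlisten:
# bounded retries with a short sleep between attempts.
wait_for_sock() {
    local sock=$1 max_retries=${2:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        # Succeed as soon as the path exists and is a socket.
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```

Usage would mirror the trace: `wait_for_sock /var/tmp/spdk.sock 100` after launching the target in the background.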
00:04:10.646   22:31:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:04:10.646   22:31:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:04:10.646   22:31:11 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:10.646   22:31:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:10.646  {
00:04:10.646  "filename": "/tmp/spdk_mem_dump.txt"
00:04:10.646  }
00:04:10.646   22:31:11 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:10.646   22:31:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:04:10.646  DPDK memory size 824.000000 MiB in 1 heap(s)
00:04:10.646  1 heaps totaling size 824.000000 MiB
00:04:10.646    size:  824.000000 MiB heap id: 0
00:04:10.646  end heaps----------
00:04:10.646  9 mempools totaling size 603.782043 MiB
00:04:10.646    size:  212.674988 MiB name: PDU_immediate_data_Pool
00:04:10.646    size:  158.602051 MiB name: PDU_data_out_Pool
00:04:10.646    size:  100.555481 MiB name: bdev_io_33319
00:04:10.646    size:   50.003479 MiB name: msgpool_33319
00:04:10.646    size:   36.509338 MiB name: fsdev_io_33319
00:04:10.646    size:   21.763794 MiB name: PDU_Pool
00:04:10.646    size:   19.513306 MiB name: SCSI_TASK_Pool
00:04:10.646    size:    4.133484 MiB name: evtpool_33319
00:04:10.646    size:    0.026123 MiB name: Session_Pool
00:04:10.646  end mempools-------
00:04:10.646  6 memzones totaling size 4.142822 MiB
00:04:10.646    size:    1.000366 MiB name: RG_ring_0_33319
00:04:10.646    size:    1.000366 MiB name: RG_ring_1_33319
00:04:10.646    size:    1.000366 MiB name: RG_ring_4_33319
00:04:10.646    size:    1.000366 MiB name: RG_ring_5_33319
00:04:10.646    size:    0.125366 MiB name: RG_ring_2_33319
00:04:10.646    size:    0.015991 MiB name: RG_ring_3_33319
00:04:10.646  end memzones-------
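dpdk_mem_info.py renders the stats dumped to /tmp/spdk_mem_dump.txt as heap/mempool/memzone sections like the summary above. A hedged awk sketch that totals the per-mempool sizes from such a dump (the line format is assumed purely from the output shown here, not from the script's source):

```shell
# Sum the per-mempool sizes from a dpdk_mem_info.py-style summary.
# Assumed input format, taken from the log above:
#   "  size:   50.003479 MiB name: msgpool_33319"
# between "N mempools totaling ..." and "end mempools-------".
sum_mempools() {
    awk '
        /mempools totaling/      { in_pool = 1; next }  # enter the section
        /end mempools/           { in_pool = 0 }        # leave the section
        in_pool && $1 == "size:" { total += $2 }        # accumulate MiB
        END                      { printf "%.6f MiB\n", total }
    '
}

sum_mempools <<'EOF'
9 mempools totaling size 603.782043 MiB
  size:  212.674988 MiB name: PDU_immediate_data_Pool
  size:  158.602051 MiB name: PDU_data_out_Pool
  size:  100.555481 MiB name: bdev_io_33319
end mempools-------
EOF
```

Note the heredoc holds only three of the nine pools from the run above, so its total will not match the 603.782043 MiB the tool reports.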
00:04:10.646   22:31:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:04:10.905  heap id: 0 total size: 824.000000 MiB number of busy elements: 44 number of free elements: 19
00:04:10.905    list of free elements. size: 16.847595 MiB
00:04:10.905      element at address: 0x200006400000 with size:    1.995972 MiB
00:04:10.905      element at address: 0x20000a600000 with size:    1.995972 MiB
00:04:10.905      element at address: 0x200003e00000 with size:    1.991028 MiB
00:04:10.905      element at address: 0x200019500040 with size:    0.999939 MiB
00:04:10.905      element at address: 0x200019900040 with size:    0.999939 MiB
00:04:10.905      element at address: 0x200019a00000 with size:    0.999329 MiB
00:04:10.905      element at address: 0x200000400000 with size:    0.998108 MiB
00:04:10.905      element at address: 0x200032600000 with size:    0.994324 MiB
00:04:10.905      element at address: 0x200019200000 with size:    0.959900 MiB
00:04:10.905      element at address: 0x200019d00040 with size:    0.937256 MiB
00:04:10.905      element at address: 0x200000200000 with size:    0.716980 MiB
00:04:10.905      element at address: 0x20001b400000 with size:    0.583191 MiB
00:04:10.905      element at address: 0x200000c00000 with size:    0.495300 MiB
00:04:10.905      element at address: 0x200019600000 with size:    0.491150 MiB
00:04:10.905      element at address: 0x200019e00000 with size:    0.485657 MiB
00:04:10.905      element at address: 0x200012c00000 with size:    0.436157 MiB
00:04:10.905      element at address: 0x200028800000 with size:    0.411072 MiB
00:04:10.905      element at address: 0x200000800000 with size:    0.355286 MiB
00:04:10.905      element at address: 0x20000a5ff040 with size:    0.001038 MiB
00:04:10.905    list of standard malloc elements. size: 199.221497 MiB
00:04:10.905      element at address: 0x20000a7fef80 with size:  132.000183 MiB
00:04:10.905      element at address: 0x2000065fef80 with size:   64.000183 MiB
00:04:10.905      element at address: 0x2000193fff80 with size:    1.000183 MiB
00:04:10.905      element at address: 0x2000197fff80 with size:    1.000183 MiB
00:04:10.905      element at address: 0x200019bfff80 with size:    1.000183 MiB
00:04:10.905      element at address: 0x2000003d9e80 with size:    0.140808 MiB
00:04:10.905      element at address: 0x200019deff40 with size:    0.062683 MiB
00:04:10.905      element at address: 0x2000003fdf40 with size:    0.007996 MiB
00:04:10.905      element at address: 0x200012bff040 with size:    0.000427 MiB
00:04:10.905      element at address: 0x200012bffa00 with size:    0.000366 MiB
00:04:10.905      element at address: 0x2000002d7b00 with size:    0.000244 MiB
00:04:10.905      element at address: 0x2000003d9d80 with size:    0.000244 MiB
00:04:10.905      element at address: 0x2000004ff840 with size:    0.000244 MiB
00:04:10.905      element at address: 0x2000004ff940 with size:    0.000244 MiB
00:04:10.905      element at address: 0x2000004ffa40 with size:    0.000244 MiB
00:04:10.905      element at address: 0x2000004ffcc0 with size:    0.000244 MiB
00:04:10.905      element at address: 0x2000004ffdc0 with size:    0.000244 MiB
00:04:10.905      element at address: 0x20000087f3c0 with size:    0.000244 MiB
00:04:10.905      element at address: 0x20000087f4c0 with size:    0.000244 MiB
00:04:10.905      element at address: 0x2000008ff800 with size:    0.000244 MiB
00:04:10.905      element at address: 0x2000008ffa80 with size:    0.000244 MiB
00:04:10.905      element at address: 0x200000cfef00 with size:    0.000244 MiB
00:04:10.905      element at address: 0x200000cff000 with size:    0.000244 MiB
00:04:10.905      element at address: 0x20000a5ff480 with size:    0.000244 MiB
00:04:10.905      element at address: 0x20000a5ff580 with size:    0.000244 MiB
00:04:10.905      element at address: 0x20000a5ff680 with size:    0.000244 MiB
00:04:10.905      element at address: 0x20000a5ff780 with size:    0.000244 MiB
00:04:10.905      element at address: 0x20000a5ff880 with size:    0.000244 MiB
00:04:10.905      element at address: 0x20000a5ff980 with size:    0.000244 MiB
00:04:10.905      element at address: 0x20000a5ffc00 with size:    0.000244 MiB
00:04:10.905      element at address: 0x20000a5ffd00 with size:    0.000244 MiB
00:04:10.905      element at address: 0x20000a5ffe00 with size:    0.000244 MiB
00:04:10.905      element at address: 0x20000a5fff00 with size:    0.000244 MiB
00:04:10.905      element at address: 0x200012bff200 with size:    0.000244 MiB
00:04:10.905      element at address: 0x200012bff300 with size:    0.000244 MiB
00:04:10.905      element at address: 0x200012bff400 with size:    0.000244 MiB
00:04:10.905      element at address: 0x200012bff500 with size:    0.000244 MiB
00:04:10.905      element at address: 0x200012bff600 with size:    0.000244 MiB
00:04:10.905      element at address: 0x200012bff700 with size:    0.000244 MiB
00:04:10.905      element at address: 0x200012bff800 with size:    0.000244 MiB
00:04:10.905      element at address: 0x200012bff900 with size:    0.000244 MiB
00:04:10.905      element at address: 0x200012bffb80 with size:    0.000244 MiB
00:04:10.905      element at address: 0x200012bffc80 with size:    0.000244 MiB
00:04:10.905      element at address: 0x200012bfff00 with size:    0.000244 MiB
00:04:10.905    list of memzone associated elements. size: 607.930908 MiB
00:04:10.905      element at address: 0x20001b4954c0 with size:  211.416809 MiB
00:04:10.905        associated memzone info: size:  211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:10.905      element at address: 0x20002886ff80 with size:  157.562622 MiB
00:04:10.905        associated memzone info: size:  157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:10.905      element at address: 0x200012df1e40 with size:  100.055115 MiB
00:04:10.905        associated memzone info: size:  100.054932 MiB name: MP_bdev_io_33319_0
00:04:10.905      element at address: 0x200000dff340 with size:   48.003113 MiB
00:04:10.905        associated memzone info: size:   48.002930 MiB name: MP_msgpool_33319_0
00:04:10.905      element at address: 0x200003ffdb40 with size:   36.008972 MiB
00:04:10.905        associated memzone info: size:   36.008789 MiB name: MP_fsdev_io_33319_0
00:04:10.905      element at address: 0x200019fbe900 with size:   20.255615 MiB
00:04:10.905        associated memzone info: size:   20.255432 MiB name: MP_PDU_Pool_0
00:04:10.905      element at address: 0x2000327feb00 with size:   18.005127 MiB
00:04:10.905        associated memzone info: size:   18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:10.905      element at address: 0x2000004ffec0 with size:    3.000305 MiB
00:04:10.905        associated memzone info: size:    3.000122 MiB name: MP_evtpool_33319_0
00:04:10.905      element at address: 0x2000009ffdc0 with size:    2.000549 MiB
00:04:10.905        associated memzone info: size:    2.000366 MiB name: RG_MP_msgpool_33319
00:04:10.905      element at address: 0x2000002d7c00 with size:    1.008179 MiB
00:04:10.905        associated memzone info: size:    1.007996 MiB name: MP_evtpool_33319
00:04:10.905      element at address: 0x2000196fde00 with size:    1.008179 MiB
00:04:10.905        associated memzone info: size:    1.007996 MiB name: MP_PDU_Pool
00:04:10.905      element at address: 0x200019ebc780 with size:    1.008179 MiB
00:04:10.905        associated memzone info: size:    1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:10.905      element at address: 0x2000192fde00 with size:    1.008179 MiB
00:04:10.905        associated memzone info: size:    1.007996 MiB name: MP_PDU_data_out_Pool
00:04:10.905      element at address: 0x200012cefcc0 with size:    1.008179 MiB
00:04:10.905        associated memzone info: size:    1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:10.905      element at address: 0x200000cff100 with size:    1.000549 MiB
00:04:10.905        associated memzone info: size:    1.000366 MiB name: RG_ring_0_33319
00:04:10.905      element at address: 0x2000008ffb80 with size:    1.000549 MiB
00:04:10.905        associated memzone info: size:    1.000366 MiB name: RG_ring_1_33319
00:04:10.905      element at address: 0x200019affd40 with size:    1.000549 MiB
00:04:10.905        associated memzone info: size:    1.000366 MiB name: RG_ring_4_33319
00:04:10.905      element at address: 0x2000326fe8c0 with size:    1.000549 MiB
00:04:10.905        associated memzone info: size:    1.000366 MiB name: RG_ring_5_33319
00:04:10.905      element at address: 0x20000087f5c0 with size:    0.500549 MiB
00:04:10.905        associated memzone info: size:    0.500366 MiB name: RG_MP_fsdev_io_33319
00:04:10.905      element at address: 0x200000c7ecc0 with size:    0.500549 MiB
00:04:10.905        associated memzone info: size:    0.500366 MiB name: RG_MP_bdev_io_33319
00:04:10.905      element at address: 0x20001967dbc0 with size:    0.500549 MiB
00:04:10.905        associated memzone info: size:    0.500366 MiB name: RG_MP_PDU_Pool
00:04:10.905      element at address: 0x200012c6fa80 with size:    0.500549 MiB
00:04:10.905        associated memzone info: size:    0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:10.905      element at address: 0x200019e7c540 with size:    0.250549 MiB
00:04:10.905        associated memzone info: size:    0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:10.905      element at address: 0x2000002b78c0 with size:    0.125549 MiB
00:04:10.905        associated memzone info: size:    0.125366 MiB name: RG_MP_evtpool_33319
00:04:10.905      element at address: 0x20000085f180 with size:    0.125549 MiB
00:04:10.905        associated memzone info: size:    0.125366 MiB name: RG_ring_2_33319
00:04:10.905      element at address: 0x2000192f5bc0 with size:    0.031799 MiB
00:04:10.905        associated memzone info: size:    0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:10.905      element at address: 0x2000288693c0 with size:    0.023804 MiB
00:04:10.905        associated memzone info: size:    0.023621 MiB name: MP_Session_Pool_0
00:04:10.905      element at address: 0x20000085af40 with size:    0.016174 MiB
00:04:10.905        associated memzone info: size:    0.015991 MiB name: RG_ring_3_33319
00:04:10.905      element at address: 0x20002886f540 with size:    0.002502 MiB
00:04:10.905        associated memzone info: size:    0.002319 MiB name: RG_MP_Session_Pool
00:04:10.905      element at address: 0x2000004ffb40 with size:    0.000366 MiB
00:04:10.905        associated memzone info: size:    0.000183 MiB name: MP_msgpool_33319
00:04:10.905      element at address: 0x2000008ff900 with size:    0.000366 MiB
00:04:10.905        associated memzone info: size:    0.000183 MiB name: MP_fsdev_io_33319
00:04:10.905      element at address: 0x200012bffd80 with size:    0.000366 MiB
00:04:10.905        associated memzone info: size:    0.000183 MiB name: MP_bdev_io_33319
00:04:10.905      element at address: 0x20000a5ffa80 with size:    0.000366 MiB
00:04:10.905        associated memzone info: size:    0.000183 MiB name: MP_Session_Pool
00:04:10.905   22:31:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:10.905   22:31:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 33319
00:04:10.905   22:31:11 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 33319 ']'
00:04:10.905   22:31:11 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 33319
00:04:10.905    22:31:11 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:04:10.905   22:31:11 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:10.905    22:31:11 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 33319
00:04:10.905   22:31:11 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:10.905   22:31:11 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:10.905   22:31:11 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 33319'
00:04:10.905  killing process with pid 33319
00:04:10.905   22:31:11 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 33319
00:04:10.905   22:31:11 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 33319
00:04:13.435  
00:04:13.435  real	0m4.349s
00:04:13.435  user	0m4.268s
00:04:13.435  sys	0m0.631s
00:04:13.435   22:31:14 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:13.435   22:31:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:13.435  ************************************
00:04:13.435  END TEST dpdk_mem_utility
00:04:13.435  ************************************
00:04:13.435   22:31:14  -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/event.sh
00:04:13.435   22:31:14  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:13.435   22:31:14  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:13.435   22:31:14  -- common/autotest_common.sh@10 -- # set +x
00:04:13.435  ************************************
00:04:13.435  START TEST event
00:04:13.435  ************************************
00:04:13.435   22:31:14 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/event.sh
00:04:13.693  * Looking for test storage...
00:04:13.693  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event
00:04:13.693    22:31:14 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:13.693     22:31:14 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:13.693     22:31:14 event -- common/autotest_common.sh@1711 -- # lcov --version
00:04:13.693    22:31:14 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:13.693    22:31:14 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:13.693    22:31:14 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:13.693    22:31:14 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:13.693    22:31:14 event -- scripts/common.sh@336 -- # IFS=.-:
00:04:13.693    22:31:14 event -- scripts/common.sh@336 -- # read -ra ver1
00:04:13.693    22:31:14 event -- scripts/common.sh@337 -- # IFS=.-:
00:04:13.693    22:31:14 event -- scripts/common.sh@337 -- # read -ra ver2
00:04:13.693    22:31:14 event -- scripts/common.sh@338 -- # local 'op=<'
00:04:13.693    22:31:14 event -- scripts/common.sh@340 -- # ver1_l=2
00:04:13.693    22:31:14 event -- scripts/common.sh@341 -- # ver2_l=1
00:04:13.693    22:31:14 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:13.693    22:31:14 event -- scripts/common.sh@344 -- # case "$op" in
00:04:13.693    22:31:14 event -- scripts/common.sh@345 -- # : 1
00:04:13.693    22:31:14 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:13.693    22:31:14 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:13.693     22:31:14 event -- scripts/common.sh@365 -- # decimal 1
00:04:13.693     22:31:14 event -- scripts/common.sh@353 -- # local d=1
00:04:13.693     22:31:14 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:13.693     22:31:14 event -- scripts/common.sh@355 -- # echo 1
00:04:13.693    22:31:14 event -- scripts/common.sh@365 -- # ver1[v]=1
00:04:13.693     22:31:14 event -- scripts/common.sh@366 -- # decimal 2
00:04:13.693     22:31:14 event -- scripts/common.sh@353 -- # local d=2
00:04:13.693     22:31:14 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:13.693     22:31:14 event -- scripts/common.sh@355 -- # echo 2
00:04:13.693    22:31:14 event -- scripts/common.sh@366 -- # ver2[v]=2
00:04:13.693    22:31:14 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:13.693    22:31:14 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:13.693    22:31:14 event -- scripts/common.sh@368 -- # return 0
00:04:13.693    22:31:14 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:13.693    22:31:14 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:13.693  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:13.693  		--rc genhtml_branch_coverage=1
00:04:13.693  		--rc genhtml_function_coverage=1
00:04:13.693  		--rc genhtml_legend=1
00:04:13.693  		--rc geninfo_all_blocks=1
00:04:13.693  		--rc geninfo_unexecuted_blocks=1
00:04:13.693  		
00:04:13.693  		'
00:04:13.693    22:31:14 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:13.693  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:13.693  		--rc genhtml_branch_coverage=1
00:04:13.693  		--rc genhtml_function_coverage=1
00:04:13.693  		--rc genhtml_legend=1
00:04:13.693  		--rc geninfo_all_blocks=1
00:04:13.693  		--rc geninfo_unexecuted_blocks=1
00:04:13.693  		
00:04:13.693  		'
00:04:13.693    22:31:14 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:13.693  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:13.693  		--rc genhtml_branch_coverage=1
00:04:13.693  		--rc genhtml_function_coverage=1
00:04:13.693  		--rc genhtml_legend=1
00:04:13.693  		--rc geninfo_all_blocks=1
00:04:13.693  		--rc geninfo_unexecuted_blocks=1
00:04:13.693  		
00:04:13.693  		'
00:04:13.693    22:31:14 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:13.693  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:13.693  		--rc genhtml_branch_coverage=1
00:04:13.693  		--rc genhtml_function_coverage=1
00:04:13.693  		--rc genhtml_legend=1
00:04:13.693  		--rc geninfo_all_blocks=1
00:04:13.693  		--rc geninfo_unexecuted_blocks=1
00:04:13.693  		
00:04:13.693  		'
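The cmp_versions trace above (scripts/common.sh, used here to decide whether the installed lcov is at least 2.x) splits each version string on `.`, `-`, and `:` and compares it component by component. A self-contained sketch of that comparison (`version_lt` is an illustrative name; the real helpers are `lt` and `cmp_versions` in scripts/common.sh, which also handle other operators):

```shell
# Compare two dotted version strings numerically, component by component,
# as in the cmp_versions trace above. Returns 0 if $1 < $2.
version_lt() {
    local IFS='.-:'
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < max; v++ )); do
        # Missing or non-numeric components fall back to 0,
        # mirroring what the "decimal" helper does in the trace.
        a=${ver1[v]:-0}; b=${ver2[v]:-0}
        [[ $a =~ ^[0-9]+$ ]] || a=0
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal, so not less-than
}
```

With this, `version_lt 1.15 2` succeeds, which is exactly the branch the trace takes before enabling the branch/function coverage LCOV options.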
00:04:13.693   22:31:14 event -- event/event.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/bdev/nbd_common.sh
00:04:13.693    22:31:14 event -- bdev/nbd_common.sh@6 -- # set -e
00:04:13.693   22:31:14 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:13.693   22:31:14 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:04:13.693   22:31:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:13.693   22:31:14 event -- common/autotest_common.sh@10 -- # set +x
00:04:13.693  ************************************
00:04:13.693  START TEST event_perf
00:04:13.693  ************************************
00:04:13.693   22:31:14 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:13.693  Running I/O for 1 seconds...[2024-12-10 22:31:14.379681] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:04:13.693  [2024-12-10 22:31:14.379788] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid34210 ]
00:04:13.955  [2024-12-10 22:31:14.504997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:13.955  [2024-12-10 22:31:14.650601] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:04:13.955  [2024-12-10 22:31:14.650640] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:04:13.955  [2024-12-10 22:31:14.650707] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:13.955  [2024-12-10 22:31:14.650737] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:04:15.327  Running I/O for 1 seconds...
00:04:15.327  lcore  0:   180447
00:04:15.327  lcore  1:   180444
00:04:15.327  lcore  2:   180446
00:04:15.327  lcore  3:   180448
00:04:15.327  done.
00:04:15.327  
00:04:15.327  real	0m1.592s
00:04:15.327  user	0m4.442s
00:04:15.327  sys	0m0.140s
00:04:15.327   22:31:15 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:15.327   22:31:15 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:04:15.327  ************************************
00:04:15.327  END TEST event_perf
00:04:15.327  ************************************
00:04:15.328   22:31:15 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:15.328   22:31:15 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:04:15.328   22:31:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:15.328   22:31:15 event -- common/autotest_common.sh@10 -- # set +x
00:04:15.328  ************************************
00:04:15.328  START TEST event_reactor
00:04:15.328  ************************************
00:04:15.328   22:31:15 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:15.328  [2024-12-10 22:31:16.022389] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:04:15.328  [2024-12-10 22:31:16.022510] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid34439 ]
00:04:15.586  [2024-12-10 22:31:16.172473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:15.586  [2024-12-10 22:31:16.316370] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:16.963  test_start
00:04:16.963  oneshot
00:04:16.963  tick 100
00:04:16.963  tick 100
00:04:16.963  tick 250
00:04:16.963  tick 100
00:04:16.963  tick 100
00:04:16.963  tick 100
00:04:16.963  tick 250
00:04:16.963  tick 500
00:04:16.963  tick 100
00:04:16.963  tick 100
00:04:16.963  tick 250
00:04:16.963  tick 100
00:04:16.963  tick 100
00:04:16.963  test_end
00:04:16.963  
00:04:16.963  real	0m1.599s
00:04:16.963  user	0m1.447s
00:04:16.963  sys	0m0.144s
00:04:16.963   22:31:17 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:16.963   22:31:17 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:04:16.963  ************************************
00:04:16.963  END TEST event_reactor
00:04:16.963  ************************************
00:04:16.963   22:31:17 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:16.963   22:31:17 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:04:16.963   22:31:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:16.963   22:31:17 event -- common/autotest_common.sh@10 -- # set +x
00:04:16.963  ************************************
00:04:16.963  START TEST event_reactor_perf
00:04:16.963  ************************************
00:04:16.963   22:31:17 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:16.963  [2024-12-10 22:31:17.658519] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:04:16.963  [2024-12-10 22:31:17.658602] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid34864 ]
00:04:17.222  [2024-12-10 22:31:17.797575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:17.222  [2024-12-10 22:31:17.935010] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:18.595  test_start
00:04:18.595  test_end
00:04:18.595  Performance:   227397 events per second
00:04:18.595  
00:04:18.595  real	0m1.574s
00:04:18.595  user	0m1.424s
00:04:18.595  sys	0m0.142s
00:04:18.595   22:31:19 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:18.595   22:31:19 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:04:18.595  ************************************
00:04:18.595  END TEST event_reactor_perf
00:04:18.595  ************************************
00:04:18.595    22:31:19 event -- event/event.sh@49 -- # uname -s
00:04:18.595   22:31:19 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:04:18.595   22:31:19 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:04:18.595   22:31:19 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:18.595   22:31:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:18.595   22:31:19 event -- common/autotest_common.sh@10 -- # set +x
00:04:18.595  ************************************
00:04:18.595  START TEST event_scheduler
00:04:18.595  ************************************
00:04:18.595   22:31:19 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:04:18.595  * Looking for test storage...
00:04:18.595  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler
00:04:18.595    22:31:19 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:18.595     22:31:19 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version
00:04:18.595     22:31:19 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:18.595    22:31:19 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:18.595    22:31:19 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:18.595    22:31:19 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:18.595    22:31:19 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:18.595    22:31:19 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:04:18.595    22:31:19 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:04:18.595    22:31:19 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:04:18.595    22:31:19 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:04:18.595    22:31:19 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:04:18.595    22:31:19 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:04:18.595    22:31:19 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:04:18.595    22:31:19 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:18.595    22:31:19 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:04:18.595    22:31:19 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:04:18.595    22:31:19 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:18.595    22:31:19 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:18.595     22:31:19 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:04:18.595     22:31:19 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:04:18.595     22:31:19 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:18.595     22:31:19 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:04:18.595    22:31:19 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:04:18.595     22:31:19 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:04:18.595     22:31:19 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:04:18.595     22:31:19 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:18.595     22:31:19 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:04:18.595    22:31:19 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:04:18.595    22:31:19 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:18.595    22:31:19 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:18.595    22:31:19 event.event_scheduler -- scripts/common.sh@368 -- # return 0
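The trace above is `scripts/common.sh`'s `cmp_versions` deciding `1.15 < 2`: it splits both versions on `.`, `-`, and `:`, walks the components pairwise, and treats missing components as 0. A standalone sketch of that per-component comparison (this is a hedged rewrite for illustration, not the harness's `cmp_versions`, which also validates each component through its `decimal` helper; bash is assumed):

```shell
# lt VER1 VER2 — succeed iff VER1 sorts strictly before VER2,
# comparing dot/dash/colon-separated components numerically.
lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    # Walk up to the longer of the two component lists.
    local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < max; v++ )); do
        a=${ver1[v]:-0}   # missing components count as 0
        b=${ver2[v]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1   # equal is not less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

Note the comparison is numeric per field, so `1.9 < 1.15` holds — which is exactly why plain string comparison cannot be used for version checks.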
00:04:18.595    22:31:19 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:18.595    22:31:19 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:18.595  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:18.595  		--rc genhtml_branch_coverage=1
00:04:18.595  		--rc genhtml_function_coverage=1
00:04:18.595  		--rc genhtml_legend=1
00:04:18.595  		--rc geninfo_all_blocks=1
00:04:18.595  		--rc geninfo_unexecuted_blocks=1
00:04:18.595  		
00:04:18.595  		'
00:04:18.596    22:31:19 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:18.596  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:18.596  		--rc genhtml_branch_coverage=1
00:04:18.596  		--rc genhtml_function_coverage=1
00:04:18.596  		--rc genhtml_legend=1
00:04:18.596  		--rc geninfo_all_blocks=1
00:04:18.596  		--rc geninfo_unexecuted_blocks=1
00:04:18.596  		
00:04:18.596  		'
00:04:18.596    22:31:19 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:18.596  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:18.596  		--rc genhtml_branch_coverage=1
00:04:18.596  		--rc genhtml_function_coverage=1
00:04:18.596  		--rc genhtml_legend=1
00:04:18.596  		--rc geninfo_all_blocks=1
00:04:18.596  		--rc geninfo_unexecuted_blocks=1
00:04:18.596  		
00:04:18.596  		'
00:04:18.596    22:31:19 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:18.596  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:18.596  		--rc genhtml_branch_coverage=1
00:04:18.596  		--rc genhtml_function_coverage=1
00:04:18.596  		--rc genhtml_legend=1
00:04:18.596  		--rc geninfo_all_blocks=1
00:04:18.596  		--rc geninfo_unexecuted_blocks=1
00:04:18.596  		
00:04:18.596  		'
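The exports above stash the coverage flags in `LCOV_OPTS` and prepend them to the `lcov` command line via `LCOV`, so later coverage steps can expand one variable instead of repeating the flags. A minimal sketch of that pattern (the flags mirror the trace; `lcov` itself is not invoked here, and the `--capture` line is only an illustrative later use, not taken from this log):

```shell
# Stash coverage flags once; the blank line inside the quoted value in
# the log is just the tail of the same multi-line string.
export LCOV_OPTS='
    --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
    --rc genhtml_branch_coverage=1
    --rc genhtml_function_coverage=1
    --rc genhtml_legend=1
    --rc geninfo_all_blocks=1
    --rc geninfo_unexecuted_blocks=1
'
export LCOV="lcov $LCOV_OPTS"
# a later coverage step would then expand it unquoted, e.g.:
# $LCOV --capture --directory build --output-file coverage.info
echo "$LCOV" | grep -q 'lcov_branch_coverage=1' && echo "coverage flags wired"
```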
00:04:18.596   22:31:19 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:04:18.596   22:31:19 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:04:18.596   22:31:19 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=35137
00:04:18.596   22:31:19 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:04:18.596   22:31:19 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 35137
00:04:18.596   22:31:19 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 35137 ']'
00:04:18.596   22:31:19 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:18.596   22:31:19 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:18.596   22:31:19 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:18.596  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:18.596   22:31:19 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:18.596   22:31:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:18.855  [2024-12-10 22:31:19.451292] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:04:18.855  [2024-12-10 22:31:19.451401] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid35137 ]
00:04:18.855  [2024-12-10 22:31:19.557107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:19.113  [2024-12-10 22:31:19.662662] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:19.113  [2024-12-10 22:31:19.662710] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:04:19.113  [2024-12-10 22:31:19.662737] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:04:19.113  [2024-12-10 22:31:19.662759] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:04:19.680   22:31:20 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:19.680   22:31:20 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:04:19.680   22:31:20 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:04:19.680   22:31:20 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:19.680   22:31:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:19.680  [2024-12-10 22:31:20.317525] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:04:19.680  [2024-12-10 22:31:20.317556] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:04:19.680  [2024-12-10 22:31:20.317604] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:04:19.680  [2024-12-10 22:31:20.317621] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:04:19.680  [2024-12-10 22:31:20.317634] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:04:19.680   22:31:20 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:19.680   22:31:20 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:04:19.680   22:31:20 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:19.680   22:31:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:19.940  [2024-12-10 22:31:20.593708] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:04:19.940   22:31:20 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:19.940   22:31:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:04:19.940   22:31:20 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:19.940   22:31:20 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:19.940   22:31:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:19.940  ************************************
00:04:19.940  START TEST scheduler_create_thread
00:04:19.940  ************************************
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:19.940  2
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:19.940  3
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:19.940  4
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:19.940  5
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:19.940  6
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:19.940  7
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:19.940  8
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:19.940  9
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:19.940  10
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:19.940    22:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:04:19.940    22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:19.940    22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:19.940    22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:19.940   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:20.198   22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:20.198    22:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:04:20.198    22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:20.198    22:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:21.571    22:31:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:21.571   22:31:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:04:21.571   22:31:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:04:21.571   22:31:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:21.571   22:31:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:22.507   22:31:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
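The `scheduler_thread_create` calls above pin one active and one idle thread to each of the four cores using cpumasks `0x1`, `0x2`, `0x4`, `0x8`. Those masks are single-bit shifts of the core index; a small sketch of deriving them (how the harness could compute them generically — the test script itself hard-codes each mask; bash arrays assumed):

```shell
# One single-core cpumask per core: bit N set for core N.
masks=()
for (( core = 0; core < 4; core++ )); do
    masks+=( "$(printf '0x%x' $(( 1 << core )))" )
    # real harness per core, active then idle:
    #   rpc_cmd --plugin scheduler_plugin scheduler_thread_create \
    #       -n active_pinned -m ${masks[core]} -a 100
done
echo "${masks[@]}"
```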
00:04:22.507  
00:04:22.507  real	0m2.621s
00:04:22.507  user	0m0.020s
00:04:22.507  sys	0m0.005s
00:04:22.507   22:31:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:22.507   22:31:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:22.507  ************************************
00:04:22.507  END TEST scheduler_create_thread
00:04:22.507  ************************************
00:04:22.507   22:31:23 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:04:22.507   22:31:23 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 35137
00:04:22.507   22:31:23 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 35137 ']'
00:04:22.507   22:31:23 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 35137
00:04:22.507    22:31:23 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:04:22.507   22:31:23 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:22.507    22:31:23 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 35137
00:04:22.765   22:31:23 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:04:22.765   22:31:23 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:04:22.765   22:31:23 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 35137'
00:04:22.765  killing process with pid 35137
00:04:22.765   22:31:23 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 35137
00:04:22.765   22:31:23 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 35137
00:04:23.024  [2024-12-10 22:31:23.728897] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
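The teardown traced above is `killprocess` from `common/autotest_common.sh`: verify the pid is set, check the process name with `ps`, compare it against `sudo` (the trace saw `reactor_2`, so the plain path was taken), then `kill` and `wait`. A reduced standalone sketch of just that path (the real helper handles the `sudo` case differently and retries; this version only covers what the log shows):

```shell
# killprocess PID — signal a background child and reap it.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" 2>/dev/null || return 0          # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1                  # real helper takes another path here
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap; ignore the TERM exit status
}

sleep 30 &
bg_pid=$!
killprocess "$bg_pid"
```

`wait` only works for children of the current shell, which is why the harness launches the scheduler app and kills it from the same script.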
00:04:23.963  
00:04:23.963  real	0m5.412s
00:04:23.963  user	0m9.757s
00:04:23.963  sys	0m0.471s
00:04:23.963   22:31:24 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:23.963   22:31:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:23.963  ************************************
00:04:23.963  END TEST event_scheduler
00:04:23.963  ************************************
00:04:23.963   22:31:24 event -- event/event.sh@51 -- # modprobe -n nbd
00:04:23.963   22:31:24 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:04:23.963   22:31:24 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:23.963   22:31:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:23.963   22:31:24 event -- common/autotest_common.sh@10 -- # set +x
00:04:23.963  ************************************
00:04:23.963  START TEST app_repeat
00:04:23.963  ************************************
00:04:23.963   22:31:24 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:04:23.963   22:31:24 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:23.963   22:31:24 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:23.963   22:31:24 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:04:23.963   22:31:24 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:23.963   22:31:24 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:04:23.963   22:31:24 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:04:23.963   22:31:24 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:04:23.963   22:31:24 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:04:23.963   22:31:24 event.app_repeat -- event/event.sh@19 -- # repeat_pid=36211
00:04:23.963   22:31:24 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:04:23.963   22:31:24 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 36211'
00:04:23.963  Process app_repeat pid: 36211
00:04:23.963   22:31:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:23.963   22:31:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:04:23.963  spdk_app_start Round 0
00:04:23.963   22:31:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 36211 /var/tmp/spdk-nbd.sock
00:04:23.963   22:31:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 36211 ']'
00:04:23.963   22:31:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:23.963   22:31:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:23.963   22:31:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:23.963  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:23.963   22:31:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:23.963   22:31:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:24.222  [2024-12-10 22:31:24.750918] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:04:24.222  [2024-12-10 22:31:24.751007] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid36211 ]
00:04:24.222  [2024-12-10 22:31:24.876909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:24.486  [2024-12-10 22:31:25.016053] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:24.486  [2024-12-10 22:31:25.016057] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:04:25.059   22:31:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:25.059   22:31:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:25.059   22:31:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:25.317  Malloc0
00:04:25.317   22:31:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:25.577  Malloc1
00:04:25.577   22:31:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:25.577   22:31:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:25.577   22:31:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:25.577   22:31:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:25.577   22:31:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:25.577   22:31:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:25.577   22:31:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:25.577   22:31:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:25.577   22:31:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:25.577   22:31:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:25.577   22:31:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:25.577   22:31:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:25.577   22:31:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:04:25.577   22:31:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:25.577   22:31:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:25.577   22:31:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:25.835  /dev/nbd0
00:04:25.835    22:31:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:25.835   22:31:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:25.835   22:31:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:04:25.835   22:31:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:25.835   22:31:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:25.836   22:31:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:25.836   22:31:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:04:25.836   22:31:26 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:25.836   22:31:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:25.836   22:31:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:25.836   22:31:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:25.836  1+0 records in
00:04:25.836  1+0 records out
00:04:25.836  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189511 s, 21.6 MB/s
00:04:25.836    22:31:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:04:25.836   22:31:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:25.836   22:31:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:04:25.836   22:31:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:25.836   22:31:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:25.836   22:31:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:25.836   22:31:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:25.836   22:31:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:26.095  /dev/nbd1
00:04:26.095    22:31:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:26.095   22:31:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:26.095   22:31:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:04:26.095   22:31:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:26.095   22:31:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:26.095   22:31:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:26.095   22:31:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:04:26.095   22:31:26 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:26.095   22:31:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:26.095   22:31:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:26.095   22:31:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:26.095  1+0 records in
00:04:26.095  1+0 records out
00:04:26.095  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017464 s, 23.5 MB/s
00:04:26.095    22:31:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:04:26.095   22:31:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:26.095   22:31:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:04:26.095   22:31:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:26.095   22:31:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:26.095   22:31:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:26.095   22:31:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
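`waitfornbd` above polls `/proc/partitions` up to 20 times for the device name, then `dd`s one 4 KiB block back to confirm the device actually serves I/O. The retry skeleton, generalized to an arbitrary probe command (`poll_until` is a hypothetical helper name, not part of the harness):

```shell
# poll_until MAX_TRIES CMD [ARGS...] — retry CMD with a short delay,
# succeeding as soon as it does; fail after MAX_TRIES attempts.
poll_until() {
    local max_tries=$1; shift
    local i
    for (( i = 1; i <= max_tries; i++ )); do
        if "$@"; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

# the harness's check, expressed through the helper:
#   poll_until 20 grep -q -w nbd0 /proc/partitions
```

In the real `waitfornbd`, breaking out of the loop additionally triggers the direct-I/O `dd` read and size check seen in the trace.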
00:04:26.095    22:31:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:26.095    22:31:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:26.095     22:31:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:26.353    22:31:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:26.353    {
00:04:26.353      "nbd_device": "/dev/nbd0",
00:04:26.353      "bdev_name": "Malloc0"
00:04:26.353    },
00:04:26.353    {
00:04:26.353      "nbd_device": "/dev/nbd1",
00:04:26.353      "bdev_name": "Malloc1"
00:04:26.353    }
00:04:26.353  ]'
00:04:26.353     22:31:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:04:26.353    {
00:04:26.353      "nbd_device": "/dev/nbd0",
00:04:26.353      "bdev_name": "Malloc0"
00:04:26.353    },
00:04:26.353    {
00:04:26.353      "nbd_device": "/dev/nbd1",
00:04:26.353      "bdev_name": "Malloc1"
00:04:26.353    }
00:04:26.353  ]'
00:04:26.353     22:31:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:26.353    22:31:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:26.353  /dev/nbd1'
00:04:26.353     22:31:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:26.353  /dev/nbd1'
00:04:26.353     22:31:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:26.353    22:31:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:04:26.353    22:31:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:04:26.353   22:31:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:04:26.353   22:31:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
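The count cross-check above pipes the `nbd_get_disks` JSON through `jq -r '.[] | .nbd_device'`, counts the resulting device paths with `grep -c`, and compares against the expected disk count. A sketch on the same JSON shape using a `grep -o` extraction instead of `jq` (so it runs where `jq` is absent; the JSON literal mirrors the trace):

```shell
# What nbd_get_disks returned in the trace, as a shell literal.
nbd_disks_json='[
  {"nbd_device": "/dev/nbd0", "bdev_name": "Malloc0"},
  {"nbd_device": "/dev/nbd1", "bdev_name": "Malloc1"}
]'
# Pull out the device paths (the harness uses jq -r '.[] | .nbd_device'),
# then count them exactly as nbd_get_count does with grep -c.
nbd_disks_name=$(printf '%s\n' "$nbd_disks_json" | grep -o '/dev/nbd[0-9]*')
count=$(printf '%s\n' "$nbd_disks_name" | grep -c /dev/nbd)
test "$count" -eq 2 && echo "all $count nbd devices mapped"
```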
00:04:26.353   22:31:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:26.353   22:31:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:26.354   22:31:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:26.354   22:31:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:26.354   22:31:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:04:26.354   22:31:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:26.354   22:31:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:26.354  256+0 records in
00:04:26.354  256+0 records out
00:04:26.354  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00360687 s, 291 MB/s
00:04:26.354   22:31:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:26.354   22:31:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:26.354  256+0 records in
00:04:26.354  256+0 records out
00:04:26.354  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201415 s, 52.1 MB/s
00:04:26.354   22:31:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:26.354   22:31:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:26.354  256+0 records in
00:04:26.354  256+0 records out
00:04:26.354  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246378 s, 42.6 MB/s
00:04:26.354   22:31:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:26.354   22:31:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:26.354   22:31:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:26.354   22:31:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:26.354   22:31:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:04:26.354   22:31:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:26.354   22:31:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:26.354   22:31:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:26.354   22:31:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:26.354   22:31:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:26.354   22:31:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:26.354   22:31:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:04:26.354   22:31:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:26.354   22:31:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:26.354   22:31:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:26.354   22:31:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:26.354   22:31:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:26.354   22:31:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:26.354   22:31:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:26.612    22:31:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:26.612   22:31:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:26.612   22:31:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:26.612   22:31:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:26.612   22:31:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:26.612   22:31:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:26.612   22:31:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:26.612   22:31:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:26.612   22:31:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:26.612   22:31:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:26.870    22:31:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:26.870   22:31:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:26.870   22:31:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:26.870   22:31:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:26.870   22:31:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:26.870   22:31:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:26.870   22:31:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:26.870   22:31:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:26.870    22:31:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:26.870    22:31:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:26.870     22:31:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:26.870    22:31:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:26.870     22:31:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:26.870     22:31:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:27.129    22:31:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:27.129     22:31:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:27.129     22:31:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:27.129     22:31:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:27.129    22:31:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:27.129    22:31:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:27.129   22:31:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:27.129   22:31:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:27.129   22:31:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:04:27.129   22:31:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:27.389   22:31:28 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:04:28.769  [2024-12-10 22:31:29.442915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:29.028  [2024-12-10 22:31:29.575631] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:29.028  [2024-12-10 22:31:29.575630] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:04:29.028  [2024-12-10 22:31:29.807211] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:29.028  [2024-12-10 22:31:29.807290] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:30.406   22:31:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:30.406   22:31:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:04:30.406  spdk_app_start Round 1
00:04:30.406   22:31:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 36211 /var/tmp/spdk-nbd.sock
00:04:30.406   22:31:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 36211 ']'
00:04:30.406   22:31:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:30.406   22:31:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:30.406   22:31:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:30.406  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:30.406   22:31:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:30.406   22:31:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:30.665   22:31:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:30.665   22:31:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:30.665   22:31:31 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:30.924  Malloc0
00:04:30.924   22:31:31 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:31.183  Malloc1
00:04:31.183   22:31:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:31.183   22:31:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:31.183   22:31:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:31.183   22:31:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:31.183   22:31:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:31.183   22:31:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:31.183   22:31:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:31.183   22:31:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:31.183   22:31:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:31.183   22:31:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:31.183   22:31:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:31.183   22:31:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:31.183   22:31:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:04:31.183   22:31:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:31.183   22:31:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:31.183   22:31:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:31.441  /dev/nbd0
00:04:31.441    22:31:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:31.441   22:31:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:31.441   22:31:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:04:31.441   22:31:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:31.441   22:31:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:31.441   22:31:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:31.441   22:31:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:04:31.441   22:31:32 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:31.441   22:31:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:31.441   22:31:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:31.441   22:31:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:31.441  1+0 records in
00:04:31.441  1+0 records out
00:04:31.441  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00014068 s, 29.1 MB/s
00:04:31.441    22:31:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:04:31.441   22:31:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:31.441   22:31:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:04:31.441   22:31:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:31.441   22:31:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:31.441   22:31:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:31.441   22:31:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:31.441   22:31:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:31.700  /dev/nbd1
00:04:31.700    22:31:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:31.700   22:31:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:31.700   22:31:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:04:31.700   22:31:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:31.700   22:31:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:31.700   22:31:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:31.700   22:31:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:04:31.700   22:31:32 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:31.700   22:31:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:31.700   22:31:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:31.700   22:31:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:31.700  1+0 records in
00:04:31.700  1+0 records out
00:04:31.700  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000159873 s, 25.6 MB/s
00:04:31.700    22:31:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:04:31.700   22:31:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:31.700   22:31:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:04:31.700   22:31:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:31.700   22:31:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:31.700   22:31:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:31.700   22:31:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:31.700    22:31:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:31.700    22:31:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:31.700     22:31:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:31.959    22:31:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:31.959    {
00:04:31.959      "nbd_device": "/dev/nbd0",
00:04:31.959      "bdev_name": "Malloc0"
00:04:31.959    },
00:04:31.959    {
00:04:31.959      "nbd_device": "/dev/nbd1",
00:04:31.959      "bdev_name": "Malloc1"
00:04:31.959    }
00:04:31.959  ]'
00:04:31.959     22:31:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:31.959     22:31:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:04:31.959    {
00:04:31.959      "nbd_device": "/dev/nbd0",
00:04:31.959      "bdev_name": "Malloc0"
00:04:31.959    },
00:04:31.959    {
00:04:31.959      "nbd_device": "/dev/nbd1",
00:04:31.959      "bdev_name": "Malloc1"
00:04:31.959    }
00:04:31.959  ]'
00:04:31.960    22:31:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:31.960  /dev/nbd1'
00:04:31.960     22:31:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:31.960  /dev/nbd1'
00:04:31.960     22:31:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:31.960    22:31:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:04:31.960    22:31:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:31.960  256+0 records in
00:04:31.960  256+0 records out
00:04:31.960  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00366903 s, 286 MB/s
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:31.960  256+0 records in
00:04:31.960  256+0 records out
00:04:31.960  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205788 s, 51.0 MB/s
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:31.960  256+0 records in
00:04:31.960  256+0 records out
00:04:31.960  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240842 s, 43.5 MB/s
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:31.960   22:31:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:32.219    22:31:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:32.219   22:31:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:32.219   22:31:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:32.219   22:31:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:32.219   22:31:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:32.219   22:31:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:32.219   22:31:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:32.219   22:31:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:32.219   22:31:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:32.219   22:31:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:32.478    22:31:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:32.478   22:31:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:32.478   22:31:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:32.478   22:31:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:32.478   22:31:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:32.478   22:31:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:32.478   22:31:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:32.478   22:31:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:32.478    22:31:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:32.478    22:31:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:32.478     22:31:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:32.737    22:31:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:32.737     22:31:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:32.737     22:31:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:32.737    22:31:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:32.737     22:31:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:32.737     22:31:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:32.737     22:31:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:32.737    22:31:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:32.737    22:31:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:32.737   22:31:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:32.737   22:31:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:32.737   22:31:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:04:32.737   22:31:33 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:32.996   22:31:33 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:04:34.374  [2024-12-10 22:31:35.088498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:34.632  [2024-12-10 22:31:35.220739] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:34.632  [2024-12-10 22:31:35.220747] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:04:34.891  [2024-12-10 22:31:35.447806] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:34.891  [2024-12-10 22:31:35.447882] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:36.268   22:31:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:36.268   22:31:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:04:36.268  spdk_app_start Round 2
00:04:36.268   22:31:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 36211 /var/tmp/spdk-nbd.sock
00:04:36.268   22:31:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 36211 ']'
00:04:36.268   22:31:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:36.268   22:31:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:36.268   22:31:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:36.268  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:36.268   22:31:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:36.268   22:31:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:36.268   22:31:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:36.268   22:31:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:36.268   22:31:36 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:36.527  Malloc0
00:04:36.527   22:31:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:36.786  Malloc1
00:04:36.786   22:31:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:36.786   22:31:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:36.786   22:31:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:36.786   22:31:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:36.786   22:31:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:36.786   22:31:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:36.786   22:31:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:36.786   22:31:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:36.786   22:31:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:36.786   22:31:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:36.786   22:31:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:36.786   22:31:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:36.786   22:31:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:04:36.786   22:31:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:36.786   22:31:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:36.786   22:31:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:37.045  /dev/nbd0
00:04:37.045    22:31:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:37.045   22:31:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:37.045   22:31:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:04:37.045   22:31:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:37.045   22:31:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:37.045   22:31:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:37.045   22:31:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:04:37.045   22:31:37 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:37.045   22:31:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:37.045   22:31:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:37.045   22:31:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:37.045  1+0 records in
00:04:37.045  1+0 records out
00:04:37.045  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000146672 s, 27.9 MB/s
00:04:37.045    22:31:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:04:37.045   22:31:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:37.045   22:31:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:04:37.045   22:31:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:37.045   22:31:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:37.045   22:31:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:37.045   22:31:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:37.045   22:31:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:37.304  /dev/nbd1
00:04:37.304    22:31:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:37.304   22:31:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:37.304   22:31:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:04:37.304   22:31:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:37.304   22:31:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:37.304   22:31:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:37.304   22:31:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:04:37.304   22:31:37 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:37.304   22:31:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:37.304   22:31:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:37.304   22:31:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:37.304  1+0 records in
00:04:37.304  1+0 records out
00:04:37.304  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180456 s, 22.7 MB/s
00:04:37.304    22:31:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:04:37.304   22:31:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:37.304   22:31:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:04:37.304   22:31:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:37.304   22:31:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:37.304   22:31:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:37.304   22:31:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:37.304    22:31:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:37.304    22:31:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:37.304     22:31:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:37.564    22:31:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:37.564    {
00:04:37.564      "nbd_device": "/dev/nbd0",
00:04:37.564      "bdev_name": "Malloc0"
00:04:37.564    },
00:04:37.564    {
00:04:37.564      "nbd_device": "/dev/nbd1",
00:04:37.564      "bdev_name": "Malloc1"
00:04:37.564    }
00:04:37.564  ]'
00:04:37.564     22:31:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:37.564     22:31:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:04:37.564    {
00:04:37.564      "nbd_device": "/dev/nbd0",
00:04:37.564      "bdev_name": "Malloc0"
00:04:37.564    },
00:04:37.564    {
00:04:37.564      "nbd_device": "/dev/nbd1",
00:04:37.564      "bdev_name": "Malloc1"
00:04:37.564    }
00:04:37.564  ]'
00:04:37.564    22:31:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:37.564  /dev/nbd1'
00:04:37.564     22:31:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:37.564  /dev/nbd1'
00:04:37.564     22:31:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:37.564    22:31:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:04:37.564    22:31:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:37.564  256+0 records in
00:04:37.564  256+0 records out
00:04:37.564  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00356619 s, 294 MB/s
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:37.564  256+0 records in
00:04:37.564  256+0 records out
00:04:37.564  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200688 s, 52.2 MB/s
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:37.564  256+0 records in
00:04:37.564  256+0 records out
00:04:37.564  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238343 s, 44.0 MB/s
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
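The traced nbd_dd_data_verify calls above are the entire data-integrity check: fill a temp file with 1 MiB of random data (256 x 4 KiB blocks), dd it onto each NBD device, then cmp each device byte-for-byte against the temp file. A minimal standalone sketch of that flow, using ordinary temp files as stand-ins for /dev/nbd0 and /dev/nbd1 so it runs without NBD devices (oflag=direct is dropped here, since regular files on some filesystems reject O_DIRECT; the real script uses it to bypass the page cache):

```shell
# Stand-ins for the tmp_file and nbd_list seen in the trace above.
tmp_file=$(mktemp)
targets=("$(mktemp)" "$(mktemp)")

# Write phase: 1 MiB (256 x 4 KiB) of random data, copied to every target.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
for t in "${targets[@]}"; do
    dd if="$tmp_file" of="$t" bs=4096 count=256 status=none
done

# Verify phase: byte-for-byte compare of the first 1 MiB against each target.
for t in "${targets[@]}"; do
    cmp -b -n 1M "$tmp_file" "$t"
done

rm -f "$tmp_file" "${targets[@]}"
```

cmp exits nonzero on the first mismatch, so any corruption during the round trip fails the test immediately, mirroring the verify branch at nbd_common.sh@83.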
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:37.564   22:31:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:37.822    22:31:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:37.822   22:31:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:37.822   22:31:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:37.822   22:31:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:37.822   22:31:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:37.822   22:31:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:37.822   22:31:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:37.822   22:31:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:37.822   22:31:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:37.822   22:31:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:38.081    22:31:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:38.081   22:31:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:38.081   22:31:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:38.081   22:31:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:38.081   22:31:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:38.081   22:31:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:38.081   22:31:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:38.081   22:31:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
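The waitfornbd_exit traces above show a bounded polling loop: after nbd_stop_disk, retry up to 20 times until the device name disappears from /proc/partitions, then break. A generic sketch of that pattern (the function name and the file argument are illustrative, not SPDK names; the real helper greps /proc/partitions directly):

```shell
# Poll up to 20 times until $name is no longer listed in $file.
wait_for_gone() {
    local name=$1 file=$2 i
    for ((i = 1; i <= 20; i++)); do
        if ! grep -q -w "$name" "$file"; then
            return 0    # entry is gone, device teardown completed
        fi
        sleep 0.1       # brief pause before the next poll
    done
    return 1            # still present after all retries: give up
}
```

In the log both loops break on the very first grep, meaning the kernel had already released nbd0 and nbd1 by the time the check ran.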
00:04:38.081    22:31:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:38.081    22:31:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:38.081     22:31:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:38.339    22:31:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:38.339     22:31:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:38.339     22:31:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:38.339    22:31:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:38.339     22:31:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:38.339     22:31:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:38.339     22:31:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:38.339    22:31:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:38.339    22:31:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:38.339   22:31:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:38.339   22:31:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:38.339   22:31:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
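The two nbd_get_count invocations above follow one pipeline: fetch the disk list as JSON over RPC, extract the device paths with jq, count them with grep -c. The bare `true` at nbd_common.sh@65 in the trace suggests grep's nonzero exit status on a zero count is swallowed so the function can still report 0, as happens after teardown when the JSON is `[]`. A sketch with the RPC call replaced by the JSON literally captured in the log:

```shell
# JSON as returned by 'rpc.py nbd_get_disks' in the trace above.
nbd_disks_json='[
  {"nbd_device": "/dev/nbd0", "bdev_name": "Malloc0"},
  {"nbd_device": "/dev/nbd1", "bdev_name": "Malloc1"}
]'

# Extract the device paths, one per line.
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')

# Count matches; '|| true' keeps a zero count from failing the pipeline.
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"
```

With the empty list `[]`, jq emits nothing, grep counts 0 and exits 1, and the `|| true` lets the count of 0 propagate — exactly the sequence at L7104-L7112.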
00:04:38.339   22:31:38 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:38.905   22:31:39 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:04:40.282  [2024-12-10 22:31:40.753910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:40.282  [2024-12-10 22:31:40.887783] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:04:40.282  [2024-12-10 22:31:40.887787] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:40.541  [2024-12-10 22:31:41.118661] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:40.541  [2024-12-10 22:31:41.118729] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:41.927   22:31:42 event.app_repeat -- event/event.sh@38 -- # waitforlisten 36211 /var/tmp/spdk-nbd.sock
00:04:41.927   22:31:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 36211 ']'
00:04:41.927   22:31:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:41.927   22:31:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:41.927   22:31:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:41.927  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:41.927   22:31:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:41.927   22:31:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:41.927   22:31:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:41.927   22:31:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:41.927   22:31:42 event.app_repeat -- event/event.sh@39 -- # killprocess 36211
00:04:41.927   22:31:42 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 36211 ']'
00:04:41.927   22:31:42 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 36211
00:04:41.927    22:31:42 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:04:41.927   22:31:42 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:41.927    22:31:42 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 36211
00:04:41.927   22:31:42 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:41.927   22:31:42 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:41.927   22:31:42 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 36211'
00:04:41.927  killing process with pid 36211
00:04:41.927   22:31:42 event.app_repeat -- common/autotest_common.sh@973 -- # kill 36211
00:04:41.927   22:31:42 event.app_repeat -- common/autotest_common.sh@978 -- # wait 36211
00:04:43.306  spdk_app_start is called in Round 0.
00:04:43.306  Shutdown signal received, stop current app iteration
00:04:43.306  Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 reinitialization...
00:04:43.306  spdk_app_start is called in Round 1.
00:04:43.306  Shutdown signal received, stop current app iteration
00:04:43.306  Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 reinitialization...
00:04:43.306  spdk_app_start is called in Round 2.
00:04:43.306  Shutdown signal received, stop current app iteration
00:04:43.306  Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 reinitialization...
00:04:43.306  spdk_app_start is called in Round 3.
00:04:43.306  Shutdown signal received, stop current app iteration
00:04:43.306   22:31:43 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:04:43.306   22:31:43 event.app_repeat -- event/event.sh@42 -- # return 0
00:04:43.306  
00:04:43.306  real	0m19.118s
00:04:43.306  user	0m40.324s
00:04:43.306  sys	0m2.798s
00:04:43.306   22:31:43 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:43.306   22:31:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:43.306  ************************************
00:04:43.306  END TEST app_repeat
00:04:43.306  ************************************
00:04:43.306   22:31:43 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:04:43.306   22:31:43 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/cpu_locks.sh
00:04:43.306   22:31:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:43.306   22:31:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:43.306   22:31:43 event -- common/autotest_common.sh@10 -- # set +x
00:04:43.306  ************************************
00:04:43.306  START TEST cpu_locks
00:04:43.306  ************************************
00:04:43.306   22:31:43 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/cpu_locks.sh
00:04:43.306  * Looking for test storage...
00:04:43.306  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event
00:04:43.306    22:31:43 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:43.306     22:31:43 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:43.306     22:31:43 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version
00:04:43.306    22:31:43 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:43.306    22:31:43 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:43.306    22:31:43 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:43.306    22:31:43 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:43.306    22:31:43 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:04:43.306    22:31:43 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:04:43.306    22:31:43 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:04:43.306    22:31:43 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:04:43.306    22:31:43 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:04:43.306    22:31:43 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:04:43.306    22:31:43 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:04:43.306    22:31:43 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:43.306    22:31:43 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:04:43.306    22:31:43 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:04:43.306    22:31:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:43.306    22:31:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:43.306     22:31:43 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:04:43.306     22:31:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:04:43.306     22:31:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:43.306     22:31:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:04:43.306    22:31:43 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:04:43.306     22:31:43 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:04:43.306     22:31:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:04:43.306     22:31:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:43.306     22:31:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:04:43.306    22:31:43 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:04:43.306    22:31:43 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:43.306    22:31:43 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:43.306    22:31:43 event.cpu_locks -- scripts/common.sh@368 -- # return 0
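The cmp_versions trace above (checking `lt 1.15 2` against the lcov version) splits both version strings on `.-:` into arrays and compares them field by field, numerically. A condensed sketch of that logic; `lt` mirrors the helper's name, missing fields default to 0, and only the strictly-less-than case is shown (the real scripts/common.sh handles `<`, `>`, `=` via one cmp_versions dispatcher):

```shell
# Return 0 if version $1 is strictly less than version $2.
lt() {
    local IFS='.-:'
    local -a ver1=($1) ver2=($2)
    local v x y
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        x=${ver1[v]:-0}
        y=${ver2[v]:-0}
        if ((x < y)); then return 0; fi   # smaller at this field: less-than
        if ((x > y)); then return 1; fi   # larger at this field: not less-than
    done
    return 1                              # all fields equal: not less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

Comparing field by field, not lexically, is what makes 1.2 compare below 1.10 — the usual pitfall a plain string comparison would get wrong.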
00:04:43.306    22:31:43 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:43.306    22:31:43 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:43.306  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:43.306  		--rc genhtml_branch_coverage=1
00:04:43.306  		--rc genhtml_function_coverage=1
00:04:43.306  		--rc genhtml_legend=1
00:04:43.306  		--rc geninfo_all_blocks=1
00:04:43.306  		--rc geninfo_unexecuted_blocks=1
00:04:43.306  		
00:04:43.306  		'
00:04:43.306    22:31:43 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:43.306  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:43.306  		--rc genhtml_branch_coverage=1
00:04:43.306  		--rc genhtml_function_coverage=1
00:04:43.306  		--rc genhtml_legend=1
00:04:43.306  		--rc geninfo_all_blocks=1
00:04:43.306  		--rc geninfo_unexecuted_blocks=1
00:04:43.306  		
00:04:43.306  		'
00:04:43.306    22:31:43 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:43.306  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:43.306  		--rc genhtml_branch_coverage=1
00:04:43.306  		--rc genhtml_function_coverage=1
00:04:43.306  		--rc genhtml_legend=1
00:04:43.306  		--rc geninfo_all_blocks=1
00:04:43.306  		--rc geninfo_unexecuted_blocks=1
00:04:43.306  		
00:04:43.306  		'
00:04:43.306    22:31:43 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:43.306  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:43.306  		--rc genhtml_branch_coverage=1
00:04:43.307  		--rc genhtml_function_coverage=1
00:04:43.307  		--rc genhtml_legend=1
00:04:43.307  		--rc geninfo_all_blocks=1
00:04:43.307  		--rc geninfo_unexecuted_blocks=1
00:04:43.307  		
00:04:43.307  		'
00:04:43.307   22:31:43 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:04:43.307   22:31:43 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:04:43.307   22:31:43 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:04:43.307   22:31:43 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:04:43.307   22:31:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:43.307   22:31:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:43.307   22:31:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:43.307  ************************************
00:04:43.307  START TEST default_locks
00:04:43.307  ************************************
00:04:43.307   22:31:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:04:43.307   22:31:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=39741
00:04:43.307   22:31:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:43.307   22:31:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 39741
00:04:43.307   22:31:44 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 39741 ']'
00:04:43.307   22:31:44 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:43.307   22:31:44 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:43.307   22:31:44 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:43.307  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:43.307   22:31:44 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:43.307   22:31:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:43.566  [2024-12-10 22:31:44.105856] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:04:43.566  [2024-12-10 22:31:44.105966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid39741 ]
00:04:43.566  [2024-12-10 22:31:44.231328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:43.824  [2024-12-10 22:31:44.368004] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:44.762   22:31:45 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:44.762   22:31:45 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:04:44.762   22:31:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 39741
00:04:44.762   22:31:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 39741
00:04:44.762   22:31:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:45.021  lslocks: write error
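The locks_exist trace just above pipes the lock table for a PID into a quiet grep; the stray "lslocks: write error" appears to be lslocks hitting a broken pipe once `grep -q` exits after its first match, which is harmless. A sketch of the check (function body inferred from the trace at cpu_locks.sh@22, not copied from the source):

```shell
# Succeed if the given PID holds a lock whose path mentions spdk_cpu_lock.
locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}
```

The pipeline's exit status is grep's, so the function returns 0 only when at least one matching lock line exists for that process.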
00:04:45.021   22:31:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 39741
00:04:45.021   22:31:45 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 39741 ']'
00:04:45.021   22:31:45 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 39741
00:04:45.021    22:31:45 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:04:45.021   22:31:45 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:45.021    22:31:45 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 39741
00:04:45.021   22:31:45 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:45.021   22:31:45 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:45.021   22:31:45 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 39741'
00:04:45.021  killing process with pid 39741
00:04:45.021   22:31:45 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 39741
00:04:45.021   22:31:45 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 39741
00:04:47.558   22:31:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 39741
00:04:47.558   22:31:48 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:04:47.558   22:31:48 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 39741
00:04:47.558   22:31:48 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:04:47.559   22:31:48 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:47.559    22:31:48 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:04:47.559   22:31:48 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:47.559   22:31:48 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 39741
00:04:47.559   22:31:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 39741 ']'
00:04:47.559   22:31:48 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:47.559   22:31:48 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:47.559   22:31:48 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:47.559  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:47.559   22:31:48 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:47.559   22:31:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:47.559  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (39741) - No such process
00:04:47.559  ERROR: process (pid: 39741) is no longer running
00:04:47.559   22:31:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:47.559   22:31:48 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:04:47.559   22:31:48 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:04:47.559   22:31:48 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:47.559   22:31:48 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:47.559   22:31:48 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
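The `NOT waitforlisten 39741` trace above exercises an expected-failure wrapper: valid_exec_arg first confirms the argument is executable (`type -t`), then the wrapped command is run and its result inverted, with the exit status captured in `es` for the checks that follow. A stripped-down sketch of the inversion part (the real autotest_common.sh helper also records and range-checks the exit code, which this omits):

```shell
# Run a command and succeed only if it fails.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, which is the expected outcome
}
```

Here the wrapped waitforlisten fails because pid 39741 was already killed, so NOT reports success and the test proceeds to verify no lock files remain.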
00:04:47.559   22:31:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:04:47.559   22:31:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:04:47.559   22:31:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:04:47.559   22:31:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:04:47.559  
00:04:47.559  real	0m4.300s
00:04:47.559  user	0m4.193s
00:04:47.559  sys	0m0.696s
00:04:47.559   22:31:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:47.559   22:31:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:47.559  ************************************
00:04:47.559  END TEST default_locks
00:04:47.559  ************************************
00:04:47.559   22:31:48 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:04:47.559   22:31:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:47.559   22:31:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:47.559   22:31:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:47.818  ************************************
00:04:47.818  START TEST default_locks_via_rpc
00:04:47.818  ************************************
00:04:47.818   22:31:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:04:47.818   22:31:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:47.818   22:31:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=40587
00:04:47.818   22:31:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 40587
00:04:47.818   22:31:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 40587 ']'
00:04:47.818   22:31:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:47.818   22:31:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:47.818   22:31:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:47.818  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:47.818   22:31:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:47.818   22:31:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:47.818  [2024-12-10 22:31:48.463814] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:04:47.818  [2024-12-10 22:31:48.463941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40587 ]
00:04:47.818  [2024-12-10 22:31:48.597692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:48.077  [2024-12-10 22:31:48.737878] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:49.014   22:31:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:49.014   22:31:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:04:49.014   22:31:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:04:49.014   22:31:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:49.014   22:31:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:49.014   22:31:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:49.014   22:31:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:04:49.014   22:31:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:04:49.014   22:31:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:04:49.014   22:31:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:04:49.014   22:31:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:04:49.014   22:31:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:49.014   22:31:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:49.014   22:31:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:49.014   22:31:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 40587
00:04:49.014   22:31:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 40587
00:04:49.014   22:31:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:49.274   22:31:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 40587
00:04:49.274   22:31:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 40587 ']'
00:04:49.274   22:31:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 40587
00:04:49.274    22:31:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:04:49.274   22:31:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:49.274    22:31:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 40587
00:04:49.274   22:31:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:49.274   22:31:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:49.274   22:31:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 40587'
00:04:49.274  killing process with pid 40587
00:04:49.274   22:31:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 40587
00:04:49.274   22:31:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 40587
00:04:52.565  
00:04:52.565  real	0m4.343s
00:04:52.565  user	0m4.232s
00:04:52.565  sys	0m0.698s
00:04:52.565   22:31:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:52.565   22:31:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:52.565  ************************************
00:04:52.565  END TEST default_locks_via_rpc
00:04:52.565  ************************************
00:04:52.565   22:31:52 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:04:52.565   22:31:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:52.565   22:31:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:52.565   22:31:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:52.565  ************************************
00:04:52.565  START TEST non_locking_app_on_locked_coremask
00:04:52.565  ************************************
00:04:52.565   22:31:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:04:52.565   22:31:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:52.565   22:31:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=41322
00:04:52.565   22:31:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 41322 /var/tmp/spdk.sock
00:04:52.565   22:31:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 41322 ']'
00:04:52.565   22:31:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:52.565   22:31:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:52.565   22:31:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:52.565  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:52.565   22:31:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:52.565   22:31:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:52.565  [2024-12-10 22:31:52.862177] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:04:52.565  [2024-12-10 22:31:52.862287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41322 ]
00:04:52.565  [2024-12-10 22:31:52.986778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:52.565  [2024-12-10 22:31:53.125202] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:53.503   22:31:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:53.503   22:31:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:04:53.503   22:31:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=41649
00:04:53.503   22:31:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:04:53.503   22:31:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 41649 /var/tmp/spdk2.sock
00:04:53.503   22:31:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 41649 ']'
00:04:53.503   22:31:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:53.503   22:31:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:53.503   22:31:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:04:53.503  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:53.503   22:31:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:53.503   22:31:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:53.503  [2024-12-10 22:31:54.241216] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:04:53.503  [2024-12-10 22:31:54.241321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41649 ]
00:04:53.762  [2024-12-10 22:31:54.436570] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:04:53.762  [2024-12-10 22:31:54.436631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:54.021  [2024-12-10 22:31:54.716259] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:56.553   22:31:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:56.553   22:31:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:04:56.553   22:31:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 41322
00:04:56.553   22:31:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 41322
00:04:56.553   22:31:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:56.553  lslocks: write error
00:04:56.553   22:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 41322
00:04:56.553   22:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 41322 ']'
00:04:56.553   22:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 41322
00:04:56.553    22:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:04:56.553   22:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:56.553    22:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 41322
00:04:56.553   22:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:56.553   22:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:56.553   22:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 41322'
00:04:56.553  killing process with pid 41322
00:04:56.553   22:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 41322
00:04:56.553   22:31:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 41322
00:05:01.822   22:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 41649
00:05:01.822   22:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 41649 ']'
00:05:01.822   22:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 41649
00:05:01.822    22:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:01.822   22:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:01.822    22:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 41649
00:05:01.822   22:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:01.822   22:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:01.822   22:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 41649'
00:05:01.822  killing process with pid 41649
00:05:01.822   22:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 41649
00:05:01.822   22:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 41649
00:05:05.108  
00:05:05.108  real	0m12.488s
00:05:05.108  user	0m12.625s
00:05:05.108  sys	0m1.369s
00:05:05.108   22:32:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:05.109   22:32:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:05.109  ************************************
00:05:05.109  END TEST non_locking_app_on_locked_coremask
00:05:05.109  ************************************
00:05:05.109   22:32:05 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:05.109   22:32:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:05.109   22:32:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:05.109   22:32:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:05.109  ************************************
00:05:05.109  START TEST locking_app_on_unlocked_coremask
00:05:05.109  ************************************
00:05:05.109   22:32:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:05:05.109   22:32:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=43545
00:05:05.109   22:32:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 43545 /var/tmp/spdk.sock
00:05:05.109   22:32:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:05.109   22:32:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 43545 ']'
00:05:05.109   22:32:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:05.109   22:32:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:05.109   22:32:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:05.109  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:05.109   22:32:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:05.109   22:32:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:05.109  [2024-12-10 22:32:05.397676] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:05:05.109  [2024-12-10 22:32:05.397821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43545 ]
00:05:05.109  [2024-12-10 22:32:05.529410] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:05.109  [2024-12-10 22:32:05.529469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:05.109  [2024-12-10 22:32:05.669438] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:06.046   22:32:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:06.046   22:32:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:06.046   22:32:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:06.046   22:32:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=43756
00:05:06.046   22:32:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 43756 /var/tmp/spdk2.sock
00:05:06.046   22:32:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 43756 ']'
00:05:06.046   22:32:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:06.046   22:32:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:06.046   22:32:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:06.046  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:06.046   22:32:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:06.046   22:32:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:06.046  [2024-12-10 22:32:06.755639] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:05:06.046  [2024-12-10 22:32:06.755778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43756 ]
00:05:06.305  [2024-12-10 22:32:06.953954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:06.564  [2024-12-10 22:32:07.229694] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:09.099   22:32:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:09.099   22:32:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:09.099   22:32:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 43756
00:05:09.099   22:32:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 43756
00:05:09.099   22:32:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:09.099  lslocks: write error
00:05:09.099   22:32:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 43545
00:05:09.099   22:32:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 43545 ']'
00:05:09.099   22:32:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 43545
00:05:09.099    22:32:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:09.099   22:32:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:09.099    22:32:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 43545
00:05:09.099   22:32:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:09.099   22:32:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:09.100   22:32:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 43545'
00:05:09.100  killing process with pid 43545
00:05:09.100   22:32:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 43545
00:05:09.100   22:32:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 43545
00:05:14.373   22:32:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 43756
00:05:14.373   22:32:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 43756 ']'
00:05:14.373   22:32:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 43756
00:05:14.373    22:32:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:14.373   22:32:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:14.373    22:32:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 43756
00:05:14.373   22:32:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:14.373   22:32:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:14.373   22:32:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 43756'
00:05:14.373  killing process with pid 43756
00:05:14.373   22:32:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 43756
00:05:14.373   22:32:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 43756
00:05:17.661  
00:05:17.661  real	0m12.493s
00:05:17.661  user	0m12.639s
00:05:17.661  sys	0m1.366s
00:05:17.661   22:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:17.661   22:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:17.661  ************************************
00:05:17.661  END TEST locking_app_on_unlocked_coremask
00:05:17.661  ************************************
00:05:17.661   22:32:17 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:17.661   22:32:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:17.661   22:32:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:17.661   22:32:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:17.661  ************************************
00:05:17.661  START TEST locking_app_on_locked_coremask
00:05:17.661  ************************************
00:05:17.661   22:32:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:05:17.661   22:32:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=45842
00:05:17.661   22:32:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:17.661   22:32:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 45842 /var/tmp/spdk.sock
00:05:17.661   22:32:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 45842 ']'
00:05:17.661   22:32:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:17.661   22:32:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:17.661   22:32:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:17.661  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:17.662   22:32:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:17.662   22:32:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:17.662  [2024-12-10 22:32:17.926087] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:05:17.662  [2024-12-10 22:32:17.926192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid45842 ]
00:05:17.662  [2024-12-10 22:32:18.056190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:17.662  [2024-12-10 22:32:18.195319] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:18.599   22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:18.599   22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:18.599   22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:18.599   22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=46058
00:05:18.599   22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 46058 /var/tmp/spdk2.sock
00:05:18.600   22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:18.600   22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 46058 /var/tmp/spdk2.sock
00:05:18.600   22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:18.600   22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:18.600    22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:18.600   22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:18.600   22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 46058 /var/tmp/spdk2.sock
00:05:18.600   22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 46058 ']'
00:05:18.600   22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:18.600   22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:18.600   22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:18.600  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:18.600   22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:18.600   22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:18.600  [2024-12-10 22:32:19.308942] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:05:18.600  [2024-12-10 22:32:19.309054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46058 ]
00:05:18.859  [2024-12-10 22:32:19.506623] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 45842 has claimed it.
00:05:18.859  [2024-12-10 22:32:19.506701] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:19.427  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (46058) - No such process
00:05:19.428  ERROR: process (pid: 46058) is no longer running
00:05:19.428   22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:19.428   22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:19.428   22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:19.428   22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:19.428   22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:19.428   22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:19.428   22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 45842
00:05:19.428   22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 45842
00:05:19.428   22:32:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:19.428  lslocks: write error
00:05:19.428   22:32:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 45842
00:05:19.428   22:32:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 45842 ']'
00:05:19.428   22:32:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 45842
00:05:19.428    22:32:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:19.428   22:32:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:19.428    22:32:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 45842
00:05:19.428   22:32:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:19.428   22:32:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:19.428   22:32:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 45842'
00:05:19.428  killing process with pid 45842
00:05:19.428   22:32:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 45842
00:05:19.428   22:32:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 45842
00:05:22.718  
00:05:22.718  real	0m5.071s
00:05:22.718  user	0m5.145s
00:05:22.718  sys	0m0.908s
00:05:22.718   22:32:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:22.718   22:32:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:22.718  ************************************
00:05:22.718  END TEST locking_app_on_locked_coremask
00:05:22.718  ************************************
00:05:22.718   22:32:22 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:22.718   22:32:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:22.718   22:32:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:22.718   22:32:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:22.718  ************************************
00:05:22.718  START TEST locking_overlapped_coremask
00:05:22.718  ************************************
00:05:22.718   22:32:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:05:22.718   22:32:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:05:22.718   22:32:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=46702
00:05:22.718   22:32:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 46702 /var/tmp/spdk.sock
00:05:22.718   22:32:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 46702 ']'
00:05:22.718   22:32:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:22.718   22:32:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:22.718   22:32:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:22.718  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:22.719   22:32:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:22.719   22:32:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:22.719  [2024-12-10 22:32:23.050762] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:05:22.719  [2024-12-10 22:32:23.050879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46702 ]
00:05:22.719  [2024-12-10 22:32:23.187491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:22.719  [2024-12-10 22:32:23.331071] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:05:22.719  [2024-12-10 22:32:23.331116] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:22.719  [2024-12-10 22:32:23.331118] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:05:23.656   22:32:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:23.656   22:32:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:23.656   22:32:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:23.656   22:32:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=46924
00:05:23.656   22:32:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 46924 /var/tmp/spdk2.sock
00:05:23.656   22:32:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:23.656   22:32:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 46924 /var/tmp/spdk2.sock
00:05:23.656   22:32:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:23.656   22:32:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:23.656    22:32:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:23.656   22:32:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:23.656   22:32:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 46924 /var/tmp/spdk2.sock
00:05:23.656   22:32:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 46924 ']'
00:05:23.656   22:32:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:23.656   22:32:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:23.656   22:32:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:23.656  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:23.656   22:32:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:23.656   22:32:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:23.916  [2024-12-10 22:32:24.465021] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:05:23.916  [2024-12-10 22:32:24.465141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46924 ]
00:05:23.916  [2024-12-10 22:32:24.628135] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 46702 has claimed it.
00:05:23.916  [2024-12-10 22:32:24.628213] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:24.485  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (46924) - No such process
00:05:24.485  ERROR: process (pid: 46924) is no longer running
00:05:24.485   22:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:24.485   22:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:24.485   22:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:24.485   22:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:24.485   22:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:24.485   22:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:24.485   22:32:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:24.485   22:32:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:24.485   22:32:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:24.485   22:32:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:24.485   22:32:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 46702
00:05:24.485   22:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 46702 ']'
00:05:24.485   22:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 46702
00:05:24.485    22:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:05:24.485   22:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:24.485    22:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 46702
00:05:24.485   22:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:24.485   22:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:24.485   22:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 46702'
00:05:24.485  killing process with pid 46702
00:05:24.485   22:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 46702
00:05:24.485   22:32:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 46702
00:05:27.776  
00:05:27.776  real	0m4.877s
00:05:27.776  user	0m13.158s
00:05:27.776  sys	0m0.765s
00:05:27.776   22:32:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:27.776   22:32:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:27.776  ************************************
00:05:27.776  END TEST locking_overlapped_coremask
00:05:27.776  ************************************
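The check_remaining_locks step traced in the test above (cpu_locks.sh@36-38) compares the lock files present on disk against one expected file per claimed core. A minimal, self-contained sketch of that comparison, using a temp directory in place of /var/tmp so it can run anywhere:

```shell
#!/usr/bin/env bash
# Sketch of check_remaining_locks from cpu_locks.sh: the target creates
# /var/tmp/spdk_cpu_lock_NNN per claimed core; the test asserts exactly
# those files exist. A temp dir stands in for /var/tmp here.
tmp=$(mktemp -d)

# Simulate a target holding cores 0-2 (mask 0x7), as in the run above.
touch "$tmp"/spdk_cpu_lock_{000..002}

locks=("$tmp"/spdk_cpu_lock_*)                   # lock files actually present
locks_expected=("$tmp"/spdk_cpu_lock_{000..002}) # lock files expected

if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
	echo "locks match"
else
	echo "unexpected locks: ${locks[*]}" >&2
fi

rm -rf "$tmp"
```

Both arrays sort identically (glob order vs. brace-expansion order), so a stale or missing lock file shows up as a string mismatch.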
00:05:27.776   22:32:27 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:05:27.776   22:32:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:27.776   22:32:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:27.776   22:32:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:27.776  ************************************
00:05:27.776  START TEST locking_overlapped_coremask_via_rpc
00:05:27.776  ************************************
00:05:27.776   22:32:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:05:27.776   22:32:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:05:27.776   22:32:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=47568
00:05:27.776   22:32:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 47568 /var/tmp/spdk.sock
00:05:27.776   22:32:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 47568 ']'
00:05:27.776   22:32:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:27.776   22:32:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:27.776   22:32:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:27.776  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:27.776   22:32:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:27.776   22:32:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:27.776  [2024-12-10 22:32:27.976340] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:05:27.776  [2024-12-10 22:32:27.976449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47568 ]
00:05:27.776  [2024-12-10 22:32:28.112087] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:27.776  [2024-12-10 22:32:28.112148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:27.776  [2024-12-10 22:32:28.257828] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:05:27.776  [2024-12-10 22:32:28.257835] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:27.776  [2024-12-10 22:32:28.257839] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:05:28.714   22:32:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:28.714   22:32:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:28.714   22:32:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=47784
00:05:28.714   22:32:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:05:28.714   22:32:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 47784 /var/tmp/spdk2.sock
00:05:28.714   22:32:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 47784 ']'
00:05:28.714   22:32:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:28.714   22:32:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:28.714   22:32:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:28.714  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:28.714   22:32:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:28.714   22:32:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:28.714  [2024-12-10 22:32:29.377284] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:05:28.714  [2024-12-10 22:32:29.377387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47784 ]
00:05:28.973  [2024-12-10 22:32:29.554548] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:28.973  [2024-12-10 22:32:29.554596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:29.232  [2024-12-10 22:32:29.777332] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:05:29.232  [2024-12-10 22:32:29.777377] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:05:29.232  [2024-12-10 22:32:29.777399] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:05:31.765   22:32:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:31.765   22:32:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:31.765   22:32:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:05:31.765   22:32:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:31.765   22:32:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:31.765   22:32:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:31.765   22:32:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:31.765   22:32:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:05:31.765   22:32:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:31.765   22:32:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:05:31.765   22:32:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:31.765    22:32:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:05:31.765   22:32:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:31.765   22:32:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:31.765   22:32:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:31.766   22:32:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:31.766  [2024-12-10 22:32:32.008904] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 47568 has claimed it.
00:05:31.766  request:
00:05:31.766  {
00:05:31.766  "method": "framework_enable_cpumask_locks",
00:05:31.766  "req_id": 1
00:05:31.766  }
00:05:31.766  Got JSON-RPC error response
00:05:31.766  response:
00:05:31.766  {
00:05:31.766  "code": -32603,
00:05:31.766  "message": "Failed to claim CPU core: 2"
00:05:31.766  }
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 47568 /var/tmp/spdk.sock
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 47568 ']'
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:31.766  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 47784 /var/tmp/spdk2.sock
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 47784 ']'
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:31.766  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:31.766  
00:05:31.766  real	0m4.564s
00:05:31.766  user	0m1.357s
00:05:31.766  sys	0m0.198s
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:31.766   22:32:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:31.766  ************************************
00:05:31.766  END TEST locking_overlapped_coremask_via_rpc
00:05:31.766  ************************************
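Both failure paths above run through the NOT wrapper (autotest_common.sh@652-679 in the trace), which inverts the wrapped command's exit status. A condensed sketch of the traced logic, with the argument-validation step omitted:

```shell
#!/usr/bin/env bash
# Condensed sketch of the NOT helper: succeed only when the wrapped
# command fails, as used for waitforlisten/rpc_cmd in the tests above.
NOT() {
	local es=0
	"$@" || es=$?
	# Traced as "(( !es == 0 ))": the arithmetic test is the return
	# value -- status 0 (success) exactly when es was nonzero.
	(( !es == 0 ))
}

NOT false && echo "expected failure observed"
NOT true || echo "unexpected success rejected"
```

This is why the second spdk_tgt's lock-claim failure above counts as a pass: the RPC returns nonzero, NOT flips it to success, and the test proceeds.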
00:05:31.766   22:32:32 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:05:31.766   22:32:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 47568 ]]
00:05:31.766   22:32:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 47568
00:05:31.766   22:32:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 47568 ']'
00:05:31.766   22:32:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 47568
00:05:31.766    22:32:32 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:05:31.766   22:32:32 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:31.766    22:32:32 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 47568
00:05:31.766   22:32:32 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:31.766   22:32:32 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:31.766   22:32:32 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 47568'
00:05:31.766  killing process with pid 47568
00:05:31.766   22:32:32 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 47568
00:05:31.766   22:32:32 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 47568
00:05:35.055   22:32:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 47784 ]]
00:05:35.055   22:32:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 47784
00:05:35.055   22:32:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 47784 ']'
00:05:35.055   22:32:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 47784
00:05:35.055    22:32:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:05:35.055   22:32:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:35.055    22:32:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 47784
00:05:35.055   22:32:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:05:35.055   22:32:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:05:35.055   22:32:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 47784'
00:05:35.055  killing process with pid 47784
00:05:35.055   22:32:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 47784
00:05:35.055   22:32:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 47784
00:05:36.960   22:32:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:05:36.960   22:32:37 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:05:36.960   22:32:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 47568 ]]
00:05:36.960   22:32:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 47568
00:05:36.960   22:32:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 47568 ']'
00:05:36.960   22:32:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 47568
00:05:36.960  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (47568) - No such process
00:05:36.960   22:32:37 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 47568 is not found'
00:05:36.960  Process with pid 47568 is not found
00:05:36.960   22:32:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 47784 ]]
00:05:36.960   22:32:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 47784
00:05:36.960   22:32:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 47784 ']'
00:05:36.960   22:32:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 47784
00:05:36.960  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (47784) - No such process
00:05:36.960   22:32:37 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 47784 is not found'
00:05:36.960  Process with pid 47784 is not found
00:05:36.960   22:32:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:05:36.960  
00:05:36.960  real	0m53.361s
00:05:36.960  user	1m30.101s
00:05:36.960  sys	0m7.211s
00:05:36.960   22:32:37 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:36.960   22:32:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:36.960  ************************************
00:05:36.960  END TEST cpu_locks
00:05:36.960  ************************************
00:05:36.960  
00:05:36.960  real	1m23.050s
00:05:36.960  user	2m27.671s
00:05:36.960  sys	0m11.148s
00:05:36.960   22:32:37 event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:36.960   22:32:37 event -- common/autotest_common.sh@10 -- # set +x
00:05:36.960  ************************************
00:05:36.960  END TEST event
00:05:36.960  ************************************
00:05:36.960   22:32:37  -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/thread.sh
00:05:36.960   22:32:37  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:36.960   22:32:37  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:36.960   22:32:37  -- common/autotest_common.sh@10 -- # set +x
00:05:36.960  ************************************
00:05:36.960  START TEST thread
00:05:36.960  ************************************
00:05:36.960   22:32:37 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/thread.sh
00:05:36.960  * Looking for test storage...
00:05:36.960  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread
00:05:36.960    22:32:37 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:36.960     22:32:37 thread -- common/autotest_common.sh@1711 -- # lcov --version
00:05:36.960     22:32:37 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:36.960    22:32:37 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:36.960    22:32:37 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:36.960    22:32:37 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:36.960    22:32:37 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:36.960    22:32:37 thread -- scripts/common.sh@336 -- # IFS=.-:
00:05:36.960    22:32:37 thread -- scripts/common.sh@336 -- # read -ra ver1
00:05:36.960    22:32:37 thread -- scripts/common.sh@337 -- # IFS=.-:
00:05:36.960    22:32:37 thread -- scripts/common.sh@337 -- # read -ra ver2
00:05:36.960    22:32:37 thread -- scripts/common.sh@338 -- # local 'op=<'
00:05:36.960    22:32:37 thread -- scripts/common.sh@340 -- # ver1_l=2
00:05:36.960    22:32:37 thread -- scripts/common.sh@341 -- # ver2_l=1
00:05:36.960    22:32:37 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:36.960    22:32:37 thread -- scripts/common.sh@344 -- # case "$op" in
00:05:36.960    22:32:37 thread -- scripts/common.sh@345 -- # : 1
00:05:36.960    22:32:37 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:36.960    22:32:37 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:36.960     22:32:37 thread -- scripts/common.sh@365 -- # decimal 1
00:05:36.960     22:32:37 thread -- scripts/common.sh@353 -- # local d=1
00:05:36.960     22:32:37 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:36.960     22:32:37 thread -- scripts/common.sh@355 -- # echo 1
00:05:36.960    22:32:37 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:05:36.960     22:32:37 thread -- scripts/common.sh@366 -- # decimal 2
00:05:36.960     22:32:37 thread -- scripts/common.sh@353 -- # local d=2
00:05:36.960     22:32:37 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:36.960     22:32:37 thread -- scripts/common.sh@355 -- # echo 2
00:05:36.960    22:32:37 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:05:36.960    22:32:37 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:36.960    22:32:37 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:36.960    22:32:37 thread -- scripts/common.sh@368 -- # return 0
00:05:36.960    22:32:37 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:36.960    22:32:37 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:36.960  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.960  		--rc genhtml_branch_coverage=1
00:05:36.960  		--rc genhtml_function_coverage=1
00:05:36.960  		--rc genhtml_legend=1
00:05:36.960  		--rc geninfo_all_blocks=1
00:05:36.960  		--rc geninfo_unexecuted_blocks=1
00:05:36.960  		
00:05:36.960  		'
00:05:36.960    22:32:37 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:36.960  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.960  		--rc genhtml_branch_coverage=1
00:05:36.960  		--rc genhtml_function_coverage=1
00:05:36.960  		--rc genhtml_legend=1
00:05:36.960  		--rc geninfo_all_blocks=1
00:05:36.960  		--rc geninfo_unexecuted_blocks=1
00:05:36.960  		
00:05:36.960  		'
00:05:36.960    22:32:37 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:36.960  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.960  		--rc genhtml_branch_coverage=1
00:05:36.960  		--rc genhtml_function_coverage=1
00:05:36.960  		--rc genhtml_legend=1
00:05:36.960  		--rc geninfo_all_blocks=1
00:05:36.960  		--rc geninfo_unexecuted_blocks=1
00:05:36.960  		
00:05:36.960  		'
00:05:36.960    22:32:37 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:36.960  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.960  		--rc genhtml_branch_coverage=1
00:05:36.960  		--rc genhtml_function_coverage=1
00:05:36.960  		--rc genhtml_legend=1
00:05:36.960  		--rc geninfo_all_blocks=1
00:05:36.960  		--rc geninfo_unexecuted_blocks=1
00:05:36.960  		
00:05:36.960  		'
00:05:36.960   22:32:37 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:05:36.960   22:32:37 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:05:36.960   22:32:37 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:36.960   22:32:37 thread -- common/autotest_common.sh@10 -- # set +x
00:05:36.960  ************************************
00:05:36.960  START TEST thread_poller_perf
00:05:36.960  ************************************
00:05:36.960   22:32:37 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:05:36.960  [2024-12-10 22:32:37.487876] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:05:36.960  [2024-12-10 22:32:37.487963] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49331 ]
00:05:36.960  [2024-12-10 22:32:37.611604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:37.219  [2024-12-10 22:32:37.749667] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:37.219  Running 1000 pollers for 1 seconds with 1 microseconds period.
00:05:38.597  [2024-12-10T21:32:39.382Z]  ======================================
00:05:38.597  [2024-12-10T21:32:39.382Z]  busy:2214968052 (cyc)
00:05:38.597  [2024-12-10T21:32:39.382Z]  total_run_count: 241000
00:05:38.597  [2024-12-10T21:32:39.382Z]  tsc_hz: 2200000000 (cyc)
00:05:38.597  [2024-12-10T21:32:39.382Z]  ======================================
00:05:38.597  [2024-12-10T21:32:39.382Z]  poller_cost: 9190 (cyc), 4177 (nsec)

00:05:38.597  
00:05:38.597  real	0m1.571s
00:05:38.597  user	0m1.442s
00:05:38.597  sys	0m0.122s
00:05:38.597   22:32:39 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:38.597   22:32:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:38.597  ************************************
00:05:38.597  END TEST thread_poller_perf
00:05:38.597  ************************************
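The reported poller_cost can be reproduced from the raw counters in the run above. The arithmetic below is inferred from this log's numbers, not taken from SPDK source:

```shell
#!/usr/bin/env bash
# Reproduce poller_cost from the busy-cycle counters printed above.
# Formula inferred from the logged values, not read from SPDK source.
busy_cyc=2214968052     # busy cycles over the 1 s run
total_run_count=241000  # poller invocations
tsc_hz=2200000000       # TSC frequency, cycles per second

cost_cyc=$(( busy_cyc / total_run_count ))       # cycles per poller call
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))  # cycles -> nanoseconds

echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"
# prints: poller_cost: 9190 (cyc), 4177 (nsec) -- matching the log
```

The same division reproduces the second run as well (2204055264 / 2802000 = 786 cyc, 357 nsec), so the higher cost in the first run reflects its 1 us poller period rather than measurement noise.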
00:05:38.597   22:32:39 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:38.597   22:32:39 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:05:38.597   22:32:39 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:38.597   22:32:39 thread -- common/autotest_common.sh@10 -- # set +x
00:05:38.597  ************************************
00:05:38.597  START TEST thread_poller_perf
00:05:38.597  ************************************
00:05:38.597   22:32:39 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:38.597  [2024-12-10 22:32:39.105330] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:05:38.597  [2024-12-10 22:32:39.105417] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49751 ]
00:05:38.597  [2024-12-10 22:32:39.230899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:38.597  [2024-12-10 22:32:39.367898] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:38.597  Running 1000 pollers for 1 seconds with 0 microseconds period.
00:05:39.977  [2024-12-10T21:32:40.762Z]  ======================================
00:05:39.977  [2024-12-10T21:32:40.762Z]  busy:2204055264 (cyc)
00:05:39.977  [2024-12-10T21:32:40.762Z]  total_run_count: 2802000
00:05:39.977  [2024-12-10T21:32:40.762Z]  tsc_hz: 2200000000 (cyc)
00:05:39.977  [2024-12-10T21:32:40.762Z]  ======================================
00:05:39.977  [2024-12-10T21:32:40.762Z]  poller_cost: 786 (cyc), 357 (nsec)
00:05:39.977  
00:05:39.977  real	0m1.566s
00:05:39.977  user	0m1.443s
00:05:39.977  sys	0m0.115s
00:05:39.977   22:32:40 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:39.977   22:32:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:39.977  ************************************
00:05:39.977  END TEST thread_poller_perf
00:05:39.977  ************************************
00:05:39.977   22:32:40 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:05:39.977  
00:05:39.977  real	0m3.351s
00:05:39.977  user	0m3.006s
00:05:39.977  sys	0m0.344s
00:05:39.977   22:32:40 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:39.977   22:32:40 thread -- common/autotest_common.sh@10 -- # set +x
00:05:39.977  ************************************
00:05:39.977  END TEST thread
00:05:39.977  ************************************
00:05:39.977   22:32:40  -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:05:39.977   22:32:40  -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app/cmdline.sh
00:05:39.977   22:32:40  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:39.977   22:32:40  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:39.977   22:32:40  -- common/autotest_common.sh@10 -- # set +x
00:05:39.977  ************************************
00:05:39.977  START TEST app_cmdline
00:05:39.977  ************************************
00:05:39.977   22:32:40 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app/cmdline.sh
00:05:39.977  * Looking for test storage...
00:05:39.977  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app
00:05:39.977    22:32:40 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:40.236     22:32:40 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version
00:05:40.236     22:32:40 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:40.236    22:32:40 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:40.236    22:32:40 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:40.236    22:32:40 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:40.236    22:32:40 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:40.236    22:32:40 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:05:40.237    22:32:40 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:05:40.237    22:32:40 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:05:40.237    22:32:40 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:05:40.237    22:32:40 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:05:40.237    22:32:40 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:05:40.237    22:32:40 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:05:40.237    22:32:40 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:40.237    22:32:40 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:05:40.237    22:32:40 app_cmdline -- scripts/common.sh@345 -- # : 1
00:05:40.237    22:32:40 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:40.237    22:32:40 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:40.237     22:32:40 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:05:40.237     22:32:40 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:05:40.237     22:32:40 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:40.237     22:32:40 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:05:40.237    22:32:40 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:05:40.237     22:32:40 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:05:40.237     22:32:40 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:05:40.237     22:32:40 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:40.237     22:32:40 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:05:40.237    22:32:40 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:05:40.237    22:32:40 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:40.237    22:32:40 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:40.237    22:32:40 app_cmdline -- scripts/common.sh@368 -- # return 0
00:05:40.237    22:32:40 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:40.237    22:32:40 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:40.237  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:40.237  		--rc genhtml_branch_coverage=1
00:05:40.237  		--rc genhtml_function_coverage=1
00:05:40.237  		--rc genhtml_legend=1
00:05:40.237  		--rc geninfo_all_blocks=1
00:05:40.237  		--rc geninfo_unexecuted_blocks=1
00:05:40.237  		
00:05:40.237  		'
00:05:40.237    22:32:40 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:40.237  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:40.237  		--rc genhtml_branch_coverage=1
00:05:40.237  		--rc genhtml_function_coverage=1
00:05:40.237  		--rc genhtml_legend=1
00:05:40.237  		--rc geninfo_all_blocks=1
00:05:40.237  		--rc geninfo_unexecuted_blocks=1
00:05:40.237  		
00:05:40.237  		'
00:05:40.237    22:32:40 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:40.237  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:40.237  		--rc genhtml_branch_coverage=1
00:05:40.237  		--rc genhtml_function_coverage=1
00:05:40.237  		--rc genhtml_legend=1
00:05:40.237  		--rc geninfo_all_blocks=1
00:05:40.237  		--rc geninfo_unexecuted_blocks=1
00:05:40.237  		
00:05:40.237  		'
00:05:40.237    22:32:40 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:40.237  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:40.237  		--rc genhtml_branch_coverage=1
00:05:40.237  		--rc genhtml_function_coverage=1
00:05:40.237  		--rc genhtml_legend=1
00:05:40.237  		--rc geninfo_all_blocks=1
00:05:40.237  		--rc geninfo_unexecuted_blocks=1
00:05:40.237  		
00:05:40.237  		'
00:05:40.237   22:32:40 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:05:40.237   22:32:40 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:05:40.237   22:32:40 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=50034
00:05:40.237   22:32:40 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 50034
00:05:40.237   22:32:40 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 50034 ']'
00:05:40.237   22:32:40 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:40.237   22:32:40 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:40.237   22:32:40 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:40.237  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:40.237   22:32:40 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:40.237   22:32:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:40.237  [2024-12-10 22:32:40.928434] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:05:40.237  [2024-12-10 22:32:40.928542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50034 ]
00:05:40.496  [2024-12-10 22:32:41.055215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:40.496  [2024-12-10 22:32:41.192465] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:41.434   22:32:42 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:41.434   22:32:42 app_cmdline -- common/autotest_common.sh@868 -- # return 0
00:05:41.434   22:32:42 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:05:41.692  {
00:05:41.692    "version": "SPDK v25.01-pre git sha1 626389917",
00:05:41.692    "fields": {
00:05:41.692      "major": 25,
00:05:41.692      "minor": 1,
00:05:41.692      "patch": 0,
00:05:41.692      "suffix": "-pre",
00:05:41.692      "commit": "626389917"
00:05:41.692    }
00:05:41.692  }
00:05:41.692   22:32:42 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:05:41.692   22:32:42 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:05:41.692   22:32:42 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:05:41.692   22:32:42 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:05:41.692    22:32:42 app_cmdline -- app/cmdline.sh@26 -- # sort
00:05:41.692    22:32:42 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:05:41.692    22:32:42 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:05:41.692    22:32:42 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:41.692    22:32:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:41.692    22:32:42 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:41.692   22:32:42 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:05:41.692   22:32:42 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:05:41.692   22:32:42 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:41.692   22:32:42 app_cmdline -- common/autotest_common.sh@652 -- # local es=0
00:05:41.692   22:32:42 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:41.692   22:32:42 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:05:41.692   22:32:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:41.692    22:32:42 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:05:41.692   22:32:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:41.692    22:32:42 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:05:41.692   22:32:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:41.692   22:32:42 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:05:41.692   22:32:42 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py ]]
00:05:41.692   22:32:42 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:41.950  request:
00:05:41.950  {
00:05:41.950    "method": "env_dpdk_get_mem_stats",
00:05:41.950    "req_id": 1
00:05:41.950  }
00:05:41.950  Got JSON-RPC error response
00:05:41.950  response:
00:05:41.950  {
00:05:41.950    "code": -32601,
00:05:41.950    "message": "Method not found"
00:05:41.950  }
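The -32601 response above is the expected outcome here: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so any other method is rejected as "Method not found". A minimal sketch of that allowlist behaviour (illustrative only, not the actual SPDK dispatcher):

```python
ALLOWED = {"rpc_get_methods", "spdk_get_version"}

def dispatch(method):
    # Methods outside the --rpcs-allowed set get the standard
    # JSON-RPC "Method not found" error, code -32601.
    if method not in ALLOWED:
        return {"code": -32601, "message": "Method not found"}
    return {"result": "ok"}

print(dispatch("env_dpdk_get_mem_stats"))  # {'code': -32601, 'message': 'Method not found'}
```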
00:05:41.950   22:32:42 app_cmdline -- common/autotest_common.sh@655 -- # es=1
00:05:41.950   22:32:42 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:41.950   22:32:42 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:41.950   22:32:42 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:41.950   22:32:42 app_cmdline -- app/cmdline.sh@1 -- # killprocess 50034
00:05:41.950   22:32:42 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 50034 ']'
00:05:41.950   22:32:42 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 50034
00:05:41.950    22:32:42 app_cmdline -- common/autotest_common.sh@959 -- # uname
00:05:41.950   22:32:42 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:41.950    22:32:42 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 50034
00:05:41.950   22:32:42 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:41.950   22:32:42 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:41.950   22:32:42 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 50034'
00:05:41.950  killing process with pid 50034
00:05:41.950   22:32:42 app_cmdline -- common/autotest_common.sh@973 -- # kill 50034
00:05:41.950   22:32:42 app_cmdline -- common/autotest_common.sh@978 -- # wait 50034
00:05:45.239  
00:05:45.239  real	0m4.669s
00:05:45.239  user	0m4.898s
00:05:45.239  sys	0m0.677s
00:05:45.239   22:32:45 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:45.239   22:32:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:45.239  ************************************
00:05:45.239  END TEST app_cmdline
00:05:45.239  ************************************
00:05:45.239   22:32:45  -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app/version.sh
00:05:45.239   22:32:45  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:45.239   22:32:45  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:45.239   22:32:45  -- common/autotest_common.sh@10 -- # set +x
00:05:45.239  ************************************
00:05:45.239  START TEST version
00:05:45.239  ************************************
00:05:45.239   22:32:45 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app/version.sh
00:05:45.239  * Looking for test storage...
00:05:45.239  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app
00:05:45.239    22:32:45 version -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:45.239     22:32:45 version -- common/autotest_common.sh@1711 -- # lcov --version
00:05:45.239     22:32:45 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:45.239    22:32:45 version -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:45.239    22:32:45 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:45.239    22:32:45 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:45.240    22:32:45 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:45.240    22:32:45 version -- scripts/common.sh@336 -- # IFS=.-:
00:05:45.240    22:32:45 version -- scripts/common.sh@336 -- # read -ra ver1
00:05:45.240    22:32:45 version -- scripts/common.sh@337 -- # IFS=.-:
00:05:45.240    22:32:45 version -- scripts/common.sh@337 -- # read -ra ver2
00:05:45.240    22:32:45 version -- scripts/common.sh@338 -- # local 'op=<'
00:05:45.240    22:32:45 version -- scripts/common.sh@340 -- # ver1_l=2
00:05:45.240    22:32:45 version -- scripts/common.sh@341 -- # ver2_l=1
00:05:45.240    22:32:45 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:45.240    22:32:45 version -- scripts/common.sh@344 -- # case "$op" in
00:05:45.240    22:32:45 version -- scripts/common.sh@345 -- # : 1
00:05:45.240    22:32:45 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:45.240    22:32:45 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:45.240     22:32:45 version -- scripts/common.sh@365 -- # decimal 1
00:05:45.240     22:32:45 version -- scripts/common.sh@353 -- # local d=1
00:05:45.240     22:32:45 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:45.240     22:32:45 version -- scripts/common.sh@355 -- # echo 1
00:05:45.240    22:32:45 version -- scripts/common.sh@365 -- # ver1[v]=1
00:05:45.240     22:32:45 version -- scripts/common.sh@366 -- # decimal 2
00:05:45.240     22:32:45 version -- scripts/common.sh@353 -- # local d=2
00:05:45.240     22:32:45 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:45.240     22:32:45 version -- scripts/common.sh@355 -- # echo 2
00:05:45.240    22:32:45 version -- scripts/common.sh@366 -- # ver2[v]=2
00:05:45.240    22:32:45 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:45.240    22:32:45 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:45.240    22:32:45 version -- scripts/common.sh@368 -- # return 0
00:05:45.240    22:32:45 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:45.240    22:32:45 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:45.240  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:45.240  		--rc genhtml_branch_coverage=1
00:05:45.240  		--rc genhtml_function_coverage=1
00:05:45.240  		--rc genhtml_legend=1
00:05:45.240  		--rc geninfo_all_blocks=1
00:05:45.240  		--rc geninfo_unexecuted_blocks=1
00:05:45.240  		
00:05:45.240  		'
00:05:45.240    22:32:45 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:45.240  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:45.240  		--rc genhtml_branch_coverage=1
00:05:45.240  		--rc genhtml_function_coverage=1
00:05:45.240  		--rc genhtml_legend=1
00:05:45.240  		--rc geninfo_all_blocks=1
00:05:45.240  		--rc geninfo_unexecuted_blocks=1
00:05:45.240  		
00:05:45.240  		'
00:05:45.240    22:32:45 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:45.240  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:45.240  		--rc genhtml_branch_coverage=1
00:05:45.240  		--rc genhtml_function_coverage=1
00:05:45.240  		--rc genhtml_legend=1
00:05:45.240  		--rc geninfo_all_blocks=1
00:05:45.240  		--rc geninfo_unexecuted_blocks=1
00:05:45.240  		
00:05:45.240  		'
00:05:45.240    22:32:45 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:45.240  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:45.240  		--rc genhtml_branch_coverage=1
00:05:45.240  		--rc genhtml_function_coverage=1
00:05:45.240  		--rc genhtml_legend=1
00:05:45.240  		--rc geninfo_all_blocks=1
00:05:45.240  		--rc geninfo_unexecuted_blocks=1
00:05:45.240  		
00:05:45.240  		'
00:05:45.240    22:32:45 version -- app/version.sh@17 -- # get_header_version major
00:05:45.240    22:32:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/include/spdk/version.h
00:05:45.240    22:32:45 version -- app/version.sh@14 -- # cut -f2
00:05:45.240    22:32:45 version -- app/version.sh@14 -- # tr -d '"'
00:05:45.240   22:32:45 version -- app/version.sh@17 -- # major=25
00:05:45.240    22:32:45 version -- app/version.sh@18 -- # get_header_version minor
00:05:45.240    22:32:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/include/spdk/version.h
00:05:45.240    22:32:45 version -- app/version.sh@14 -- # tr -d '"'
00:05:45.240    22:32:45 version -- app/version.sh@14 -- # cut -f2
00:05:45.240   22:32:45 version -- app/version.sh@18 -- # minor=1
00:05:45.240    22:32:45 version -- app/version.sh@19 -- # get_header_version patch
00:05:45.240    22:32:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/include/spdk/version.h
00:05:45.240    22:32:45 version -- app/version.sh@14 -- # cut -f2
00:05:45.240    22:32:45 version -- app/version.sh@14 -- # tr -d '"'
00:05:45.240   22:32:45 version -- app/version.sh@19 -- # patch=0
00:05:45.240    22:32:45 version -- app/version.sh@20 -- # get_header_version suffix
00:05:45.240    22:32:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/include/spdk/version.h
00:05:45.240    22:32:45 version -- app/version.sh@14 -- # cut -f2
00:05:45.240    22:32:45 version -- app/version.sh@14 -- # tr -d '"'
00:05:45.240   22:32:45 version -- app/version.sh@20 -- # suffix=-pre
00:05:45.240   22:32:45 version -- app/version.sh@22 -- # version=25.1
00:05:45.240   22:32:45 version -- app/version.sh@25 -- # (( patch != 0 ))
00:05:45.240   22:32:45 version -- app/version.sh@28 -- # version=25.1rc0
00:05:45.240   22:32:45 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python
00:05:45.240    22:32:45 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:05:45.240   22:32:45 version -- app/version.sh@30 -- # py_version=25.1rc0
00:05:45.240   22:32:45 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
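version.sh assembles the 25.1rc0 string compared above from the SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX header macros. A rough sketch of that assembly (hypothetical function name; the "-pre" to "rc0" mapping is inferred from the values in this run):

```python
def spdk_version(major, minor, patch, suffix):
    # Patch is appended only when non-zero, and a "-pre" suffix
    # becomes an "rc0" pre-release tag, matching 25.1rc0 above.
    ver = f"{major}.{minor}"
    if patch != 0:
        ver += f".{patch}"
    if suffix == "-pre":
        ver += "rc0"
    return ver

print(spdk_version(25, 1, 0, "-pre"))  # 25.1rc0
```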
00:05:45.240  
00:05:45.240  real	0m0.153s
00:05:45.240  user	0m0.098s
00:05:45.240  sys	0m0.077s
00:05:45.240   22:32:45 version -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:45.240   22:32:45 version -- common/autotest_common.sh@10 -- # set +x
00:05:45.240  ************************************
00:05:45.240  END TEST version
00:05:45.240  ************************************
00:05:45.240   22:32:45  -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:05:45.240   22:32:45  -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:05:45.240    22:32:45  -- spdk/autotest.sh@194 -- # uname -s
00:05:45.240   22:32:45  -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:05:45.240   22:32:45  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:05:45.240   22:32:45  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:05:45.240   22:32:45  -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:05:45.240   22:32:45  -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:05:45.240   22:32:45  -- spdk/autotest.sh@260 -- # timing_exit lib
00:05:45.240   22:32:45  -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:45.240   22:32:45  -- common/autotest_common.sh@10 -- # set +x
00:05:45.240   22:32:45  -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:05:45.240   22:32:45  -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:05:45.240   22:32:45  -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:05:45.240   22:32:45  -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:05:45.240   22:32:45  -- spdk/autotest.sh@315 -- # '[' 1 -eq 1 ']'
00:05:45.240   22:32:45  -- spdk/autotest.sh@316 -- # HUGENODE=0
00:05:45.240   22:32:45  -- spdk/autotest.sh@316 -- # run_test vfio_user_qemu /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/vfio_user.sh --iso
00:05:45.240   22:32:45  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:05:45.240   22:32:45  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:45.240   22:32:45  -- common/autotest_common.sh@10 -- # set +x
00:05:45.240  ************************************
00:05:45.240  START TEST vfio_user_qemu
00:05:45.240  ************************************
00:05:45.240   22:32:45 vfio_user_qemu -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/vfio_user.sh --iso
00:05:45.240  * Looking for test storage...
00:05:45.240  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user
00:05:45.240    22:32:45 vfio_user_qemu -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:45.240     22:32:45 vfio_user_qemu -- common/autotest_common.sh@1711 -- # lcov --version
00:05:45.240     22:32:45 vfio_user_qemu -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:45.240    22:32:45 vfio_user_qemu -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:45.240    22:32:45 vfio_user_qemu -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:45.240    22:32:45 vfio_user_qemu -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:45.240    22:32:45 vfio_user_qemu -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:45.240    22:32:45 vfio_user_qemu -- scripts/common.sh@336 -- # IFS=.-:
00:05:45.240    22:32:45 vfio_user_qemu -- scripts/common.sh@336 -- # read -ra ver1
00:05:45.240    22:32:45 vfio_user_qemu -- scripts/common.sh@337 -- # IFS=.-:
00:05:45.240    22:32:45 vfio_user_qemu -- scripts/common.sh@337 -- # read -ra ver2
00:05:45.240    22:32:45 vfio_user_qemu -- scripts/common.sh@338 -- # local 'op=<'
00:05:45.240    22:32:45 vfio_user_qemu -- scripts/common.sh@340 -- # ver1_l=2
00:05:45.240    22:32:45 vfio_user_qemu -- scripts/common.sh@341 -- # ver2_l=1
00:05:45.240    22:32:45 vfio_user_qemu -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:45.240    22:32:45 vfio_user_qemu -- scripts/common.sh@344 -- # case "$op" in
00:05:45.240    22:32:45 vfio_user_qemu -- scripts/common.sh@345 -- # : 1
00:05:45.240    22:32:45 vfio_user_qemu -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:45.240    22:32:45 vfio_user_qemu -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:45.240     22:32:45 vfio_user_qemu -- scripts/common.sh@365 -- # decimal 1
00:05:45.240     22:32:45 vfio_user_qemu -- scripts/common.sh@353 -- # local d=1
00:05:45.240     22:32:45 vfio_user_qemu -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:45.240     22:32:45 vfio_user_qemu -- scripts/common.sh@355 -- # echo 1
00:05:45.240    22:32:45 vfio_user_qemu -- scripts/common.sh@365 -- # ver1[v]=1
00:05:45.240     22:32:45 vfio_user_qemu -- scripts/common.sh@366 -- # decimal 2
00:05:45.240     22:32:45 vfio_user_qemu -- scripts/common.sh@353 -- # local d=2
00:05:45.240     22:32:45 vfio_user_qemu -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:45.240     22:32:45 vfio_user_qemu -- scripts/common.sh@355 -- # echo 2
00:05:45.240    22:32:45 vfio_user_qemu -- scripts/common.sh@366 -- # ver2[v]=2
00:05:45.240    22:32:45 vfio_user_qemu -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:45.240    22:32:45 vfio_user_qemu -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:45.240    22:32:45 vfio_user_qemu -- scripts/common.sh@368 -- # return 0
00:05:45.240    22:32:45 vfio_user_qemu -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:45.240    22:32:45 vfio_user_qemu -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:45.240  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:45.240  		--rc genhtml_branch_coverage=1
00:05:45.240  		--rc genhtml_function_coverage=1
00:05:45.240  		--rc genhtml_legend=1
00:05:45.241  		--rc geninfo_all_blocks=1
00:05:45.241  		--rc geninfo_unexecuted_blocks=1
00:05:45.241  		
00:05:45.241  		'
00:05:45.241    22:32:45 vfio_user_qemu -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:45.241  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:45.241  		--rc genhtml_branch_coverage=1
00:05:45.241  		--rc genhtml_function_coverage=1
00:05:45.241  		--rc genhtml_legend=1
00:05:45.241  		--rc geninfo_all_blocks=1
00:05:45.241  		--rc geninfo_unexecuted_blocks=1
00:05:45.241  		
00:05:45.241  		'
00:05:45.241    22:32:45 vfio_user_qemu -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:45.241  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:45.241  		--rc genhtml_branch_coverage=1
00:05:45.241  		--rc genhtml_function_coverage=1
00:05:45.241  		--rc genhtml_legend=1
00:05:45.241  		--rc geninfo_all_blocks=1
00:05:45.241  		--rc geninfo_unexecuted_blocks=1
00:05:45.241  		
00:05:45.241  		'
00:05:45.241    22:32:45 vfio_user_qemu -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:45.241  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:45.241  		--rc genhtml_branch_coverage=1
00:05:45.241  		--rc genhtml_function_coverage=1
00:05:45.241  		--rc genhtml_legend=1
00:05:45.241  		--rc geninfo_all_blocks=1
00:05:45.241  		--rc geninfo_unexecuted_blocks=1
00:05:45.241  		
00:05:45.241  		'
00:05:45.241   22:32:45 vfio_user_qemu -- vfio_user/vfio_user.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:05:45.241    22:32:45 vfio_user_qemu -- vfio_user/common.sh@6 -- # : 128
00:05:45.241    22:32:45 vfio_user_qemu -- vfio_user/common.sh@7 -- # : 512
00:05:45.241    22:32:45 vfio_user_qemu -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:05:45.241     22:32:45 vfio_user_qemu -- vhost/common.sh@6 -- # : false
00:05:45.241     22:32:45 vfio_user_qemu -- vhost/common.sh@7 -- # : /root/vhost_test
00:05:45.241     22:32:45 vfio_user_qemu -- vhost/common.sh@8 -- # : /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:45.241     22:32:45 vfio_user_qemu -- vhost/common.sh@9 -- # : qemu-img
00:05:45.241      22:32:45 vfio_user_qemu -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:05:45.241     22:32:45 vfio_user_qemu -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:05:45.241     22:32:45 vfio_user_qemu -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:05:45.241     22:32:45 vfio_user_qemu -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:05:45.241     22:32:45 vfio_user_qemu -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:05:45.241     22:32:45 vfio_user_qemu -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:05:45.241     22:32:45 vfio_user_qemu -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:05:45.241       22:32:45 vfio_user_qemu -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/vfio_user.sh
00:05:45.241      22:32:45 vfio_user_qemu -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user
00:05:45.241     22:32:45 vfio_user_qemu -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user
00:05:45.241     22:32:45 vfio_user_qemu -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:45.241     22:32:45 vfio_user_qemu -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:05:45.241     22:32:45 vfio_user_qemu -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:05:45.241     22:32:45 vfio_user_qemu -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:05:45.241     22:32:45 vfio_user_qemu -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:05:45.241      22:32:45 vfio_user_qemu -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:05:45.241      22:32:45 vfio_user_qemu -- common/autotest.config@2 -- # vhost_0_main_core=0
00:05:45.241      22:32:45 vfio_user_qemu -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:05:45.241      22:32:45 vfio_user_qemu -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:05:45.241      22:32:45 vfio_user_qemu -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:05:45.241      22:32:45 vfio_user_qemu -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:05:45.241      22:32:45 vfio_user_qemu -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:05:45.241      22:32:45 vfio_user_qemu -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:05:45.241      22:32:45 vfio_user_qemu -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:05:45.241      22:32:45 vfio_user_qemu -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:05:45.241      22:32:45 vfio_user_qemu -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:05:45.241      22:32:45 vfio_user_qemu -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:05:45.241      22:32:45 vfio_user_qemu -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:05:45.241      22:32:45 vfio_user_qemu -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:05:45.241      22:32:45 vfio_user_qemu -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:05:45.241      22:32:45 vfio_user_qemu -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:05:45.241      22:32:45 vfio_user_qemu -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:05:45.241      22:32:45 vfio_user_qemu -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:05:45.241      22:32:45 vfio_user_qemu -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:05:45.241      22:32:45 vfio_user_qemu -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:05:45.241      22:32:45 vfio_user_qemu -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:05:45.241      22:32:45 vfio_user_qemu -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:05:45.241      22:32:45 vfio_user_qemu -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:05:45.241      22:32:45 vfio_user_qemu -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:05:45.241      22:32:45 vfio_user_qemu -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:05:45.241      22:32:45 vfio_user_qemu -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:05:45.241     22:32:45 vfio_user_qemu -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:05:45.241      22:32:45 vfio_user_qemu -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:05:45.241      22:32:45 vfio_user_qemu -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:05:45.241      22:32:45 vfio_user_qemu -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:05:45.241      22:32:45 vfio_user_qemu -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:05:45.241      22:32:45 vfio_user_qemu -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:05:45.241      22:32:45 vfio_user_qemu -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:05:45.241       22:32:45 vfio_user_qemu -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:05:45.241        22:32:45 vfio_user_qemu -- scheduler/cgroups.sh@244 -- # check_cgroup
00:05:45.241        22:32:45 vfio_user_qemu -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:05:45.241        22:32:45 vfio_user_qemu -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:05:45.241        22:32:45 vfio_user_qemu -- scheduler/cgroups.sh@10 -- # echo 2
00:05:45.241       22:32:45 vfio_user_qemu -- scheduler/cgroups.sh@244 -- # cgroup_version=2
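The `check_cgroup` trace above probes for the cgroup v2 unified hierarchy and confirms the `cpuset` controller before settling on `cgroup_version=2`. A minimal sketch of that probe, parameterized on the sysfs root for illustration (the real helper in `scheduler/cgroups.sh` hard-codes `/sys/fs/cgroup`):

```shell
# Sketch of the cgroup-version probe traced above. The sysfs root is a
# parameter here only so the logic can be exercised outside a real host;
# the actual script always reads /sys/fs/cgroup.
detect_cgroup_version() {
  local root=${1:-/sys/fs/cgroup}
  # cgroup v2 exposes a flat cgroup.controllers file at the hierarchy root
  if [[ -e "$root/cgroup.controllers" ]] \
      && [[ $(< "$root/cgroup.controllers") == *cpuset* ]]; then
    echo 2   # unified hierarchy with cpuset available
  else
    echo 1   # legacy (v1) layout, or cpuset not delegated
  fi
}
```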
00:05:45.241    22:32:45 vfio_user_qemu -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:45.241    22:32:45 vfio_user_qemu -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:05:45.241    22:32:45 vfio_user_qemu -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:45.241   22:32:45 vfio_user_qemu -- vfio_user/vfio_user.sh@11 -- # echo 'Running SPDK vfio-user fio autotest...'
00:05:45.241  Running SPDK vfio-user fio autotest...
00:05:45.241   22:32:45 vfio_user_qemu -- vfio_user/vfio_user.sh@13 -- # vhosttestinit
00:05:45.241   22:32:45 vfio_user_qemu -- vhost/common.sh@37 -- # '[' iso == iso ']'
00:05:45.241   22:32:45 vfio_user_qemu -- vhost/common.sh@38 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh
00:05:46.176  0000:00:04.7 (8086 6f27): Already using the vfio-pci driver
00:05:46.176  0000:00:04.6 (8086 6f26): Already using the vfio-pci driver
00:05:46.176  0000:00:04.5 (8086 6f25): Already using the vfio-pci driver
00:05:46.176  0000:00:04.4 (8086 6f24): Already using the vfio-pci driver
00:05:46.176  0000:00:04.3 (8086 6f23): Already using the vfio-pci driver
00:05:46.176  0000:00:04.2 (8086 6f22): Already using the vfio-pci driver
00:05:46.176  0000:00:04.1 (8086 6f21): Already using the vfio-pci driver
00:05:46.176  0000:00:04.0 (8086 6f20): Already using the vfio-pci driver
00:05:46.176  0000:80:04.7 (8086 6f27): Already using the vfio-pci driver
00:05:46.176  0000:80:04.6 (8086 6f26): Already using the vfio-pci driver
00:05:46.176  0000:80:04.5 (8086 6f25): Already using the vfio-pci driver
00:05:46.176  0000:80:04.4 (8086 6f24): Already using the vfio-pci driver
00:05:46.176  0000:80:04.3 (8086 6f23): Already using the vfio-pci driver
00:05:46.176  0000:80:04.2 (8086 6f22): Already using the vfio-pci driver
00:05:46.176  0000:80:04.1 (8086 6f21): Already using the vfio-pci driver
00:05:46.176  0000:80:04.0 (8086 6f20): Already using the vfio-pci driver
00:05:46.176  0000:0d:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:46.436   22:32:47 vfio_user_qemu -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:05:46.436   22:32:47 vfio_user_qemu -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:05:46.436   22:32:47 vfio_user_qemu -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
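The `vhosttestinit` trace above does two things: in `iso` mode it runs `scripts/setup.sh` (which produced the "Already using the vfio-pci driver" rebind messages), then it unpacks the qcow2 test image if only the `.gz` is on disk. A hedged sketch of that flow, simplified from what the trace shows:

```shell
# Simplified sketch of vhosttestinit as traced above. $rootdir and
# $VM_IMAGE are assumed to be set by the caller, as in vhost/common.sh.
vhosttestinit() {
  if [[ ${1:-} == iso ]]; then
    # Rebind NVMe/IOAT devices to vfio-pci; this step emitted the
    # "Already using the vfio-pci driver" lines in the log.
    "$rootdir/scripts/setup.sh"
  fi
  # Unpack the test VM image only when the compressed copy is the sole one
  if [[ -e $VM_IMAGE.gz && ! -e $VM_IMAGE ]]; then
    gzip -dc "$VM_IMAGE.gz" > "$VM_IMAGE"
  fi
}
```

In the second invocation later in the log (`'[' '' == iso ']'`) the mode argument is empty, so only the image check runs.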
00:05:46.436   22:32:47 vfio_user_qemu -- vfio_user/vfio_user.sh@15 -- # run_test vfio_user_nvme_fio /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_fio.sh
00:05:46.436   22:32:47 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:46.436   22:32:47 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:46.436   22:32:47 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:05:46.436  ************************************
00:05:46.436  START TEST vfio_user_nvme_fio
00:05:46.436  ************************************
00:05:46.436   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_fio.sh
00:05:46.436  * Looking for test storage...
00:05:46.436  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:46.436     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1711 -- # lcov --version
00:05:46.436     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@336 -- # IFS=.-:
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@336 -- # read -ra ver1
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@337 -- # IFS=.-:
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@337 -- # read -ra ver2
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@338 -- # local 'op=<'
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@340 -- # ver1_l=2
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@341 -- # ver2_l=1
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@344 -- # case "$op" in
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@345 -- # : 1
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:46.436     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@365 -- # decimal 1
00:05:46.436     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@353 -- # local d=1
00:05:46.436     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:46.436     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@355 -- # echo 1
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@365 -- # ver1[v]=1
00:05:46.436     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@366 -- # decimal 2
00:05:46.436     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@353 -- # local d=2
00:05:46.436     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:46.436     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@355 -- # echo 2
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@366 -- # ver2[v]=2
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@368 -- # return 0
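The `lt 1.15 2` / `cmp_versions` trace above splits each dotted version on `IFS=.-:` and compares component-wise, padding the shorter version with zeros. A condensed sketch of that comparison (the real `scripts/common.sh` routes all operators through one `cmp_versions`; this shows only the less-than case):

```shell
# Sketch of the version comparison traced above: split on . - :, then
# compare numerically position by position; missing components count as 0.
version_lt() {
  local IFS=.-:
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) v
  for (( v = 0; v < len; v++ )); do
    if (( ${a[v]:-0} > ${b[v]:-0} )); then return 1; fi
    if (( ${a[v]:-0} < ${b[v]:-0} )); then return 0; fi
  done
  return 1   # equal versions are not less-than
}
```

This is why `lcov 1.15` compares below `2` in the trace: the first components decide (`1 < 2`) before `15` is ever weighed against the padded `0`.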
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:46.436  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:46.436  		--rc genhtml_branch_coverage=1
00:05:46.436  		--rc genhtml_function_coverage=1
00:05:46.436  		--rc genhtml_legend=1
00:05:46.436  		--rc geninfo_all_blocks=1
00:05:46.436  		--rc geninfo_unexecuted_blocks=1
00:05:46.436  		
00:05:46.436  		'
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:46.436  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:46.436  		--rc genhtml_branch_coverage=1
00:05:46.436  		--rc genhtml_function_coverage=1
00:05:46.436  		--rc genhtml_legend=1
00:05:46.436  		--rc geninfo_all_blocks=1
00:05:46.436  		--rc geninfo_unexecuted_blocks=1
00:05:46.436  		
00:05:46.436  		'
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:46.436  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:46.436  		--rc genhtml_branch_coverage=1
00:05:46.436  		--rc genhtml_function_coverage=1
00:05:46.436  		--rc genhtml_legend=1
00:05:46.436  		--rc geninfo_all_blocks=1
00:05:46.436  		--rc geninfo_unexecuted_blocks=1
00:05:46.436  		
00:05:46.436  		'
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:46.436  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:46.436  		--rc genhtml_branch_coverage=1
00:05:46.436  		--rc genhtml_function_coverage=1
00:05:46.436  		--rc genhtml_legend=1
00:05:46.436  		--rc geninfo_all_blocks=1
00:05:46.436  		--rc geninfo_unexecuted_blocks=1
00:05:46.436  		
00:05:46.436  		'
00:05:46.436   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@6 -- # : 128
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@7 -- # : 512
00:05:46.436    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:05:46.436     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@6 -- # : false
00:05:46.436     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@7 -- # : /root/vhost_test
00:05:46.436     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:46.436     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@9 -- # : qemu-img
00:05:46.436      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:05:46.436     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:05:46.436     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:05:46.436     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:05:46.436     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:05:46.436     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:05:46.436     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:05:46.436       22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_fio.sh
00:05:46.436      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:05:46.436     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:05:46.436     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:46.437     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:05:46.437     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:05:46.437     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:05:46.437     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@2 -- # vhost_0_main_core=0
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:05:46.437     22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:05:46.437      22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:05:46.437       22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:05:46.437        22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@244 -- # check_cgroup
00:05:46.437        22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:05:46.437        22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:05:46.437        22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@10 -- # echo 2
00:05:46.437       22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:05:46.437    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:46.437    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:05:46.437    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/common.sh
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config
00:05:46.437    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]'
00:05:46.437    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0
00:05:46.437    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5
00:05:46.437    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:05:46.437    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7
00:05:46.437    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:05:46.437    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9
00:05:46.437    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:05:46.437    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@13 -- # get_vhost_dir 0
00:05:46.437    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:05:46.437    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:05:46.437    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@13 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
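The `get_vhost_dir 0` trace above resolves the per-instance target directory that the RPC socket path is then built from. A minimal sketch, assuming `TARGET_DIR` is set as earlier in this log:

```shell
# Sketch of get_vhost_dir as traced above; TARGET_DIR was assigned by
# vhost/common.sh@13 earlier in the log.
TARGET_DIR=/root/vhost_test/vhost
get_vhost_dir() {
  local vhost_name=${1:-}
  if [[ -z $vhost_name ]]; then
    echo "$TARGET_DIR"              # no instance name: top-level dir
  else
    echo "$TARGET_DIR/$vhost_name"  # per-instance subdirectory
  fi
}
```

The resulting `rpc_py` wrapper is then `rpc.py -s $(get_vhost_dir 0)/rpc.sock`, which is the socket every subsequent RPC in this test targets.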
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@15 -- # fio_bin=--fio-bin=/usr/src/fio-static/fio
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@16 -- # vm_no=2
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@18 -- # trap clean_vfio_user EXIT
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@19 -- # vhosttestinit
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@21 -- # timing_enter start_vfio_user
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@22 -- # vfio_user_run 0
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@11 -- # local vhost_name=0
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@12 -- # local vfio_user_dir nvmf_pid_file rpc_py
00:05:46.437    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@14 -- # get_vhost_dir 0
00:05:46.437    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:05:46.437    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:05:46.437    22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@14 -- # vfio_user_dir=/root/vhost_test/vhost/0
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@15 -- # nvmf_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@16 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@18 -- # mkdir -p /root/vhost_test/vhost/0
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@20 -- # timing_enter vfio_user_start
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@22 -- # nvmfpid=51913
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@23 -- # echo 51913
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/nvmf_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@25 -- # echo 'Process pid: 51913'
00:05:46.437  Process pid: 51913
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@26 -- # echo 'waiting for app to run...'
00:05:46.437  waiting for app to run...
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@27 -- # waitforlisten 51913 /root/vhost_test/vhost/0/rpc.sock
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@835 -- # '[' -z 51913 ']'
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:05:46.437  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:46.437   22:32:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:05:46.696  [2024-12-10 22:32:47.300220] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:05:46.696  [2024-12-10 22:32:47.300331] [ DPDK EAL parameters: nvmf --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid51913 ]
00:05:46.697  EAL: No free 2048 kB hugepages reported on node 1
00:05:46.955  [2024-12-10 22:32:47.619241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:47.214  [2024-12-10 22:32:47.759839] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:05:47.214  [2024-12-10 22:32:47.759884] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:05:47.214  [2024-12-10 22:32:47.759947] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:47.214  [2024-12-10 22:32:47.759966] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:05:47.472   22:32:48 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:47.472   22:32:48 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@868 -- # return 0
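The `waitforlisten 51913 /root/vhost_test/vhost/0/rpc.sock` trace above polls until the `nvmf_tgt` process is up and its RPC socket is listening (returning 0 once the reactors have started). A heavily simplified sketch of that pattern; the real helper also probes the RPC endpoint itself, which is omitted here:

```shell
# Simplified sketch of the waitforlisten pattern traced above: poll until
# the UNIX-domain RPC socket exists, bailing out if the app dies first or
# the retry budget is exhausted. max_retries is a parameter here for
# illustration; the traced helper defaults to 100.
waitforlisten() {
  local pid=$1 rpc_addr=$2 max_retries=${3:-100} i
  for (( i = 0; i < max_retries; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 1   # target process died
    if [[ -S $rpc_addr ]]; then return 0; fi # RPC socket is listening
    sleep 0.1
  done
  return 1                                    # timed out
}
```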
00:05:47.472   22:32:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@29 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_transport -t VFIOUSER
00:05:47.731   22:32:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@30 -- # timing_exit vfio_user_start
00:05:47.731   22:32:48 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:47.731   22:32:48 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:05:47.731    22:32:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@27 -- # seq 0 2
00:05:47.731   22:32:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@27 -- # for i in $(seq 0 $vm_no)
00:05:47.731   22:32:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@28 -- # vm_muser_dir=/root/vhost_test/vms/0/muser
00:05:47.731   22:32:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@29 -- # rm -rf /root/vhost_test/vms/0/muser
00:05:47.731   22:32:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@30 -- # mkdir -p /root/vhost_test/vms/0/muser/domain/muser0/0
00:05:47.732   22:32:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@32 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_subsystem nqn.2019-07.io.spdk:cnode0 -s SPDK000 -a
00:05:47.990   22:32:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@33 -- # (( i == vm_no ))
00:05:47.990   22:32:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_create 128 512 -b Malloc0
00:05:48.249  Malloc0
00:05:48.249   22:32:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@38 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode0 Malloc0
00:05:48.508   22:32:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@40 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode0 -t VFIOUSER -a /root/vhost_test/vms/0/muser/domain/muser0/0 -s 0
00:05:48.767   22:32:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@27 -- # for i in $(seq 0 $vm_no)
00:05:48.767   22:32:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@28 -- # vm_muser_dir=/root/vhost_test/vms/1/muser
00:05:48.767   22:32:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@29 -- # rm -rf /root/vhost_test/vms/1/muser
00:05:48.767   22:32:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@30 -- # mkdir -p /root/vhost_test/vms/1/muser/domain/muser1/1
00:05:48.767   22:32:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@32 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -s SPDK001 -a
00:05:48.767   22:32:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@33 -- # (( i == vm_no ))
00:05:48.767   22:32:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_create 128 512 -b Malloc1
00:05:49.335  Malloc1
00:05:49.335   22:32:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@38 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:05:49.335   22:32:50 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@40 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /root/vhost_test/vms/1/muser/domain/muser1/1 -s 0
00:05:49.594   22:32:50 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@27 -- # for i in $(seq 0 $vm_no)
00:05:49.594   22:32:50 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@28 -- # vm_muser_dir=/root/vhost_test/vms/2/muser
00:05:49.594   22:32:50 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@29 -- # rm -rf /root/vhost_test/vms/2/muser
00:05:49.594   22:32:50 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@30 -- # mkdir -p /root/vhost_test/vms/2/muser/domain/muser2/2
00:05:49.594   22:32:50 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@32 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -s SPDK002 -a
00:05:49.853   22:32:50 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@33 -- # (( i == vm_no ))
00:05:49.853   22:32:50 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:05:49.853   22:32:50 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock load_subsystem_config
00:05:53.138   22:32:53 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@35 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Nvme0n1
00:05:53.138   22:32:53 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@40 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /root/vhost_test/vms/2/muser/domain/muser2/2 -s 0
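The trace above is the per-VM loop in `vfio_user_fio.sh` (lines @27-@40): for each VM index it recreates the muser directory, creates an NVMe-oF subsystem, backs it with a bdev, and exposes it over the VFIOUSER transport at the muser socket path. VMs 0 and 1 get malloc bdevs; VM 2 (`i == vm_no`) instead attaches the physical NVMe via `gen_nvme.sh`/`load_subsystem_config`. A minimal sketch of the malloc branch, with `rpc.py` abbreviated and the helper name hypothetical — it emits the commands rather than running them, since they need a live SPDK target:

```shell
# Sketch of the per-VM VFIOUSER target setup seen in the trace above.
# Assumption: "rpc.py" stands in for the full
# "scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock" invocation.
build_vfio_user_setup() {
    local i=$1
    local muser_dir="/root/vhost_test/vms/$i/muser/domain/muser$i/$i"
    # Print each step instead of executing it, so the sequence is inspectable.
    printf '%s\n' \
        "mkdir -p $muser_dir" \
        "rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -s SPDK00$i -a" \
        "rpc.py bdev_malloc_create 128 512 -b Malloc$i" \
        "rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i" \
        "rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a $muser_dir -s 0"
}

build_vfio_user_setup 0
```

The listener address (`-a`) for the VFIOUSER transport is a directory path, not an IP: SPDK places the controller socket under it, and QEMU later connects to that socket.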
00:05:53.399   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@43 -- # timing_exit start_vfio_user
00:05:53.399   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:53.399   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:05:53.399   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@45 -- # used_vms=
00:05:53.399   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@46 -- # timing_enter launch_vms
00:05:53.399   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:53.399   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:05:53.399    22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@47 -- # seq 0 2
00:05:53.399   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@47 -- # for i in $(seq 0 $vm_no)
00:05:53.399   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@48 -- # vm_setup --disk-type=vfio_user --force=0 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --memory=768 --disks=0
00:05:53.399   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@518 -- # xtrace_disable
00:05:53.399   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:05:53.399  WARN: removing existing VM in '/root/vhost_test/vms/0'
00:05:53.399  INFO: Creating new VM in /root/vhost_test/vms/0
00:05:53.399  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:05:53.399  INFO: TASK MASK: 4-5
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@671 -- # local node_num=0
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@672 -- # local boot_disk_present=false
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:05:53.660  INFO: NUMA NODE: 0
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@677 -- # [[ -n '' ]]
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@686 -- # [[ -z '' ]]
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # IFS=,
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # read -r disk disk_type _
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # [[ -z '' ]]
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # disk_type=vfio_user
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@704 -- # case $disk_type in
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@758 -- # notice 'using socket /root/vhost_test/vms/0/domain/muser0/0/cntrl'
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/0/domain/muser0/0/cntrl'
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/0/domain/muser0/0/cntrl'
00:05:53.660  INFO: using socket /root/vhost_test/vms/0/domain/muser0/0/cntrl
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@759 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl")
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@760 -- # [[ 0 == '' ]]
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@780 -- # [[ -n '' ]]
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@785 -- # (( 0 ))
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/0/run.sh'
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/0/run.sh'
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/0/run.sh'
00:05:53.660  INFO: Saving to /root/vhost_test/vms/0/run.sh
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # cat
00:05:53.660    22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 4-5 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 768 --enable-kvm -cpu host -smp 2 -vga std -vnc :100 -daemonize -object memory-backend-file,id=mem,size=768M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10002,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/0/qemu.pid -serial file:/root/vhost_test/vms/0/serial.log -D /root/vhost_test/vms/0/qemu.log -chardev file,path=/root/vhost_test/vms/0/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10000-:22,hostfwd=tcp::10001-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/0/muser/domain/muser0/0/cntrl
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/0/run.sh
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@827 -- # echo 10000
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@828 -- # echo 10001
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@829 -- # echo 10002
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/0/migration_port
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@832 -- # [[ -z '' ]]
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@834 -- # echo 10004
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@835 -- # echo 100
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@837 -- # [[ -z '' ]]
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@838 -- # [[ -z '' ]]
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@49 -- # used_vms+=' 0'
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@47 -- # for i in $(seq 0 $vm_no)
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@48 -- # vm_setup --disk-type=vfio_user --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --memory=768 --disks=1
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@518 -- # xtrace_disable
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:05:53.660  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:05:53.660  INFO: Creating new VM in /root/vhost_test/vms/1
00:05:53.660  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:05:53.660  INFO: TASK MASK: 6-7
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@671 -- # local node_num=0
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@672 -- # local boot_disk_present=false
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:05:53.660   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:05:53.661  INFO: NUMA NODE: 0
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@677 -- # [[ -n '' ]]
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@686 -- # [[ -z '' ]]
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # IFS=,
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # read -r disk disk_type _
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # [[ -z '' ]]
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # disk_type=vfio_user
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@704 -- # case $disk_type in
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@758 -- # notice 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:05:53.661  INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@759 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl")
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@760 -- # [[ 1 == '' ]]
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@780 -- # [[ -n '' ]]
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@785 -- # (( 0 ))
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:05:53.661  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # cat
00:05:53.661    22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 768 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=768M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/1/muser/domain/muser1/1/cntrl
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@827 -- # echo 10100
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@828 -- # echo 10101
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@829 -- # echo 10102
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@832 -- # [[ -z '' ]]
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@834 -- # echo 10104
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@835 -- # echo 101
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@837 -- # [[ -z '' ]]
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@838 -- # [[ -z '' ]]
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@49 -- # used_vms+=' 1'
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@47 -- # for i in $(seq 0 $vm_no)
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@48 -- # vm_setup --disk-type=vfio_user --force=2 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --memory=768 --disks=2
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@518 -- # xtrace_disable
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:05:53.661  WARN: removing existing VM in '/root/vhost_test/vms/2'
00:05:53.661  INFO: Creating new VM in /root/vhost_test/vms/2
00:05:53.661  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:05:53.661  INFO: TASK MASK: 8-9
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@671 -- # local node_num=0
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@672 -- # local boot_disk_present=false
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:05:53.661  INFO: NUMA NODE: 0
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@677 -- # [[ -n '' ]]
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@686 -- # [[ -z '' ]]
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # IFS=,
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # read -r disk disk_type _
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # [[ -z '' ]]
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # disk_type=vfio_user
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@704 -- # case $disk_type in
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@758 -- # notice 'using socket /root/vhost_test/vms/2/domain/muser2/2/cntrl'
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/2/domain/muser2/2/cntrl'
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:05:53.661   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/2/domain/muser2/2/cntrl'
00:05:53.662  INFO: using socket /root/vhost_test/vms/2/domain/muser2/2/cntrl
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@759 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl")
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@760 -- # [[ 2 == '' ]]
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@780 -- # [[ -n '' ]]
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@785 -- # (( 0 ))
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/2/run.sh'
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/2/run.sh'
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/2/run.sh'
00:05:53.662  INFO: Saving to /root/vhost_test/vms/2/run.sh
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # cat
00:05:53.662    22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 8-9 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 768 --enable-kvm -cpu host -smp 2 -vga std -vnc :102 -daemonize -object memory-backend-file,id=mem,size=768M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10202,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/2/qemu.pid -serial file:/root/vhost_test/vms/2/serial.log -D /root/vhost_test/vms/2/qemu.log -chardev file,path=/root/vhost_test/vms/2/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10200-:22,hostfwd=tcp::10201-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/2/muser/domain/muser2/2/cntrl
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/2/run.sh
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@827 -- # echo 10200
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@828 -- # echo 10201
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@829 -- # echo 10202
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/2/migration_port
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@832 -- # [[ -z '' ]]
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@834 -- # echo 10204
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@835 -- # echo 102
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@837 -- # [[ -z '' ]]
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@838 -- # [[ -z '' ]]
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@49 -- # used_vms+=' 2'
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@52 -- # vm_run 0 1 2
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@843 -- # local run_all=false
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@844 -- # local vms_to_run=
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@846 -- # getopts a-: optchar
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@856 -- # false
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@859 -- # shift 0
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@860 -- # for vm in "$@"
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/0/run.sh ]]
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@866 -- # vms_to_run+=' 0'
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@860 -- # for vm in "$@"
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@860 -- # for vm in "$@"
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/2/run.sh ]]
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@866 -- # vms_to_run+=' 2'
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@871 -- # vm_is_running 0
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/0/run.sh'
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/0/run.sh'
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/0/run.sh'
00:05:53.662  INFO: running /root/vhost_test/vms/0/run.sh
00:05:53.662   22:32:54 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@877 -- # /root/vhost_test/vms/0/run.sh
00:05:53.662  Running VM in /root/vhost_test/vms/0
00:05:54.230  Waiting for QEMU pid file
00:05:54.230  [2024-12-10 22:32:55.000383] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: enabling controller
00:05:55.166  === qemu.log ===
00:05:55.166  === qemu.log ===
00:05:55.166   22:32:55 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:05:55.166   22:32:55 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@871 -- # vm_is_running 1
00:05:55.166   22:32:55 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:05:55.166   22:32:55 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:55.166   22:32:55 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:05:55.166   22:32:55 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:05:55.166   22:32:55 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:05:55.166   22:32:55 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:05:55.166   22:32:55 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:05:55.166   22:32:55 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:05:55.166   22:32:55 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:05:55.166   22:32:55 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:05:55.166   22:32:55 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:05:55.166   22:32:55 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:05:55.166   22:32:55 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:05:55.166   22:32:55 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:05:55.166  INFO: running /root/vhost_test/vms/1/run.sh
00:05:55.166   22:32:55 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:05:55.166  Running VM in /root/vhost_test/vms/1
00:05:55.425  Waiting for QEMU pid file
00:05:55.684  [2024-12-10 22:32:56.388514] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:05:56.620  === qemu.log ===
00:05:56.620  === qemu.log ===
00:05:56.620   22:32:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:05:56.620   22:32:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@871 -- # vm_is_running 2
00:05:56.620   22:32:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 2
00:05:56.620   22:32:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:56.620   22:32:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:05:56.620   22:32:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/2
00:05:56.620   22:32:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:05:56.620   22:32:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:05:56.620   22:32:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/2/run.sh'
00:05:56.620   22:32:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/2/run.sh'
00:05:56.620   22:32:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:05:56.620   22:32:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:05:56.620   22:32:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:05:56.620   22:32:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:05:56.620   22:32:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:05:56.620   22:32:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/2/run.sh'
00:05:56.620  INFO: running /root/vhost_test/vms/2/run.sh
00:05:56.620   22:32:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@877 -- # /root/vhost_test/vms/2/run.sh
00:05:56.620  Running VM in /root/vhost_test/vms/2
00:05:56.878  Waiting for QEMU pid file
00:05:57.137  [2024-12-10 22:32:57.712674] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: enabling controller
00:05:57.704  === qemu.log ===
00:05:57.704  === qemu.log ===
00:05:57.704   22:32:58 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@53 -- # vm_wait_for_boot 60 0 1 2
00:05:57.704   22:32:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@913 -- # assert_number 60
00:05:57.704   22:32:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:05:57.704   22:32:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@281 -- # return 0
00:05:57.704   22:32:58 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@915 -- # xtrace_disable
00:05:57.704   22:32:58 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:05:57.704  INFO: Waiting for VMs to boot
00:05:57.704  INFO: waiting for VM0 (/root/vhost_test/vms/0)
00:06:09.908  [2024-12-10 22:33:10.302813] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: disabling controller
00:06:09.908  [2024-12-10 22:33:10.311880] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: disabling controller
00:06:09.908  [2024-12-10 22:33:10.315913] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: enabling controller
00:06:13.201  [2024-12-10 22:33:13.720730] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:06:13.201  [2024-12-10 22:33:13.729783] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:06:13.201  [2024-12-10 22:33:13.733810] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:06:18.471  [2024-12-10 22:33:18.237936] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: disabling controller
00:06:18.471  [2024-12-10 22:33:18.247001] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: disabling controller
00:06:18.471  [2024-12-10 22:33:18.251041] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: enabling controller
00:06:18.730  
00:06:18.730  INFO: VM0 ready
00:06:18.730  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:06:18.989  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:06:19.926  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:06:23.215  
00:06:23.215  INFO: VM1 ready
00:06:23.215  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:06:23.215  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:06:24.153  INFO: waiting for VM2 (/root/vhost_test/vms/2)
00:06:27.449  
00:06:27.449  INFO: VM2 ready
00:06:27.449  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:06:27.449  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:06:28.386  INFO: all VMs ready
00:06:28.386   22:33:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@973 -- # return 0
00:06:28.386   22:33:28 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@55 -- # timing_exit launch_vms
00:06:28.386   22:33:28 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:28.386   22:33:28 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:06:28.386   22:33:28 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@57 -- # timing_enter run_vm_cmd
00:06:28.386   22:33:28 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:28.386   22:33:28 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:06:28.386   22:33:28 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@59 -- # fio_disks=
00:06:28.386   22:33:28 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@61 -- # for vm_num in $used_vms
00:06:28.386   22:33:28 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@62 -- # qemu_mask_param=VM_0_qemu_mask
00:06:28.386   22:33:28 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@64 -- # host_name=VM-0-4-5
00:06:28.386   22:33:28 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@65 -- # vm_exec 0 'hostname VM-0-4-5'
00:06:28.386   22:33:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:06:28.386   22:33:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:06:28.386   22:33:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:28.386   22:33:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:06:28.386   22:33:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:06:28.386    22:33:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:06:28.386    22:33:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:06:28.386    22:33:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:06:28.386    22:33:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:28.386    22:33:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:06:28.386    22:33:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:06:28.386   22:33:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'hostname VM-0-4-5'
00:06:28.386  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@66 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 0
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@977 -- # local OPTIND optchar
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@978 -- # local readonly=
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@979 -- # local fio_bin=
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@981 -- # case "$optchar" in
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@983 -- # case "$OPTARG" in
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@993 -- # shift 1
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@994 -- # for vm_num in "$@"
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@995 -- # notice 'Starting fio server on VM0'
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM0'
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM0'
00:06:28.386  INFO: Starting fio server on VM0
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@997 -- # vm_exec 0 'cat > /root/fio; chmod +x /root/fio'
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:06:28.386    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:06:28.386    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:06:28.386    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:06:28.386    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:28.386    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:06:28.386    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:06:28.386   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:06:28.386  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:06:28.644   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@998 -- # vm_exec 0 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:06:28.644   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:06:28.644   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:06:28.644   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:28.644   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:06:28.644   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:06:28.644    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:06:28.644    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:06:28.644    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:06:28.644    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:28.644    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:06:28.644    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:06:28.644   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:06:28.902  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:06:28.902   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@67 -- # vm_check_nvme_location 0
00:06:28.902    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # vm_exec 0 'grep -l SPDK /sys/class/nvme/*/model'
00:06:28.902    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:06:28.902    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:06:28.902    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # awk -F/ '{print $5"n1"}'
00:06:28.902    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:28.902    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:06:28.902    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:06:28.902     22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:06:28.902     22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:06:28.902     22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:06:28.902     22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:28.902     22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:06:28.902     22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:06:28.902    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l SPDK /sys/class/nvme/*/model'
00:06:28.902  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:06:29.174   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # SCSI_DISK=nvme0n1
00:06:29.174   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1046 -- # [[ -z nvme0n1 ]]
00:06:29.174    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # printf :/dev/%s nvme0n1
00:06:29.174   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # fio_disks+=' --vm=0:/dev/nvme0n1'
00:06:29.174   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@61 -- # for vm_num in $used_vms
00:06:29.174   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@62 -- # qemu_mask_param=VM_1_qemu_mask
00:06:29.174   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@64 -- # host_name=VM-1-6-7
00:06:29.174   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@65 -- # vm_exec 1 'hostname VM-1-6-7'
00:06:29.174   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:06:29.174   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:29.174   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:29.174   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:06:29.174   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:06:29.174    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:06:29.174    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:06:29.174    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:29.174    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:29.174    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:06:29.174    22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:06:29.174   22:33:29 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'hostname VM-1-6-7'
00:06:29.174  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@66 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 1
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@977 -- # local OPTIND optchar
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@978 -- # local readonly=
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@979 -- # local fio_bin=
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@981 -- # case "$optchar" in
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@983 -- # case "$OPTARG" in
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@993 -- # shift 1
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@994 -- # for vm_num in "$@"
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@995 -- # notice 'Starting fio server on VM1'
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM1'
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM1'
00:06:29.433  INFO: Starting fio server on VM1
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@997 -- # vm_exec 1 'cat > /root/fio; chmod +x /root/fio'
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:06:29.433    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:06:29.433    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:06:29.433    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:29.433    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:29.433    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:06:29.433    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:06:29.433   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:06:29.433  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:06:29.692   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@998 -- # vm_exec 1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:06:29.692   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:06:29.692   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:29.692   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:29.692   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:06:29.692   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:06:29.692    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:06:29.692    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:06:29.692    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:29.692    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:29.692    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:06:29.692    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:06:29.692   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:06:29.692  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:06:29.950   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@67 -- # vm_check_nvme_location 1
00:06:29.950    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # vm_exec 1 'grep -l SPDK /sys/class/nvme/*/model'
00:06:29.950    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:06:29.951    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:29.951    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # awk -F/ '{print $5"n1"}'
00:06:29.951    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:29.951    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:06:29.951    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:06:29.951     22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:06:29.951     22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:06:29.951     22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:29.951     22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:29.951     22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:06:29.951     22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:06:29.951    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'grep -l SPDK /sys/class/nvme/*/model'
00:06:29.951  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:06:30.210   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # SCSI_DISK=nvme0n1
00:06:30.210   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1046 -- # [[ -z nvme0n1 ]]
00:06:30.210    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # printf :/dev/%s nvme0n1
00:06:30.210   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # fio_disks+=' --vm=1:/dev/nvme0n1'
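The `vm_check_nvme_location` block above finds the SPDK-backed controller by grepping its sysfs model file, then derives the namespace block device from the matching path with `awk`. A minimal standalone reproduction of that derivation (the sysfs path here is an example, not taken live from a guest):

```shell
# grep -l prints the matching file path, e.g. /sys/class/nvme/nvme0/model.
# Splitting on '/' makes field 5 the controller name ("nvme0"); appending
# "n1" yields the first namespace's block device name.
echo /sys/class/nvme/nvme0/model | awk -F/ '{print $5"n1"}'
# → nvme0n1
```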
00:06:30.210   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@61 -- # for vm_num in $used_vms
00:06:30.210   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@62 -- # qemu_mask_param=VM_2_qemu_mask
00:06:30.210   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@64 -- # host_name=VM-2-8-9
00:06:30.210   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@65 -- # vm_exec 2 'hostname VM-2-8-9'
00:06:30.210   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:06:30.210   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:30.210   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:30.210   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:06:30.210   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:06:30.210    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:06:30.210    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:06:30.210    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:30.210    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:30.210    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:06:30.210    22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:06:30.210   22:33:30 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'hostname VM-2-8-9'
00:06:30.210  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@66 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 2
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@977 -- # local OPTIND optchar
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@978 -- # local readonly=
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@979 -- # local fio_bin=
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@981 -- # case "$optchar" in
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@983 -- # case "$OPTARG" in
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@993 -- # shift 1
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@994 -- # for vm_num in "$@"
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@995 -- # notice 'Starting fio server on VM2'
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM2'
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM2'
00:06:30.470  INFO: Starting fio server on VM2
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@997 -- # vm_exec 2 'cat > /root/fio; chmod +x /root/fio'
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:06:30.470    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:06:30.470    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:06:30.470    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:30.470    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:30.470    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:06:30.470    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:06:30.470   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:06:30.470  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:06:30.729   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@998 -- # vm_exec 2 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:06:30.729   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:06:30.729   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:30.729   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:30.729   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:06:30.729   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:06:30.729    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:06:30.729    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:06:30.729    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:30.729    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:30.729    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:06:30.729    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:06:30.729   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:06:30.729  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:06:30.988   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@67 -- # vm_check_nvme_location 2
00:06:30.988    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # vm_exec 2 'grep -l SPDK /sys/class/nvme/*/model'
00:06:30.988    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:06:30.988    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # awk -F/ '{print $5"n1"}'
00:06:30.988    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:30.988    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:30.988    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:06:30.988    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:06:30.988     22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:06:30.988     22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:06:30.988     22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:30.988     22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:30.988     22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:06:30.988     22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:06:30.988    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'grep -l SPDK /sys/class/nvme/*/model'
00:06:30.988  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:06:31.247   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # SCSI_DISK=nvme0n1
00:06:31.247   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1046 -- # [[ -z nvme0n1 ]]
00:06:31.247    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # printf :/dev/%s nvme0n1
00:06:31.247   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # fio_disks+=' --vm=2:/dev/nvme0n1'
00:06:31.247   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@72 -- # job_file=default_integrity.job
00:06:31.247   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@73 -- # run_fio --fio-bin=/usr/src/fio-static/fio --job-file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job --out=/root/vhost_test/fio_results --vm=0:/dev/nvme0n1 --vm=1:/dev/nvme0n1 --vm=2:/dev/nvme0n1
00:06:31.247   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1053 -- # local arg
00:06:31.247   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1054 -- # local job_file=
00:06:31.247   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1055 -- # local fio_bin=
00:06:31.247   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1056 -- # vms=()
00:06:31.247   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1056 -- # local vms
00:06:31.247   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1057 -- # local out=
00:06:31.247   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1058 -- # local vm
00:06:31.247   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1059 -- # local run_server_mode=true
00:06:31.247   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1060 -- # local run_plugin_mode=false
00:06:31.247   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1061 -- # local fio_start_cmd
00:06:31.247   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1062 -- # local fio_output_format=normal
00:06:31.247   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1063 -- # local fio_gtod_reduce=false
00:06:31.247   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1064 -- # local wait_for_fio=true
00:06:31.247   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:06:31.247   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:06:31.247   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1069 -- # local fio_bin=/usr/src/fio-static/fio
00:06:31.247   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1068 -- # local job_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1072 -- # local out=/root/vhost_test/fio_results
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1073 -- # mkdir -p /root/vhost_test/fio_results
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1092 -- # [[ -n /usr/src/fio-static/fio ]]
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1092 -- # [[ ! -r /usr/src/fio-static/fio ]]
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1097 -- # [[ -z /usr/src/fio-static/fio ]]
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1101 -- # [[ ! -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job ]]
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1106 -- # fio_start_cmd='/usr/src/fio-static/fio --eta=never '
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1108 -- # local job_fname
00:06:31.248    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1109 -- # basename /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1109 -- # job_fname=default_integrity.job
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1110 -- # log_fname=default_integrity.log
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1111 -- # fio_start_cmd+=' --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal '
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1115 -- # local vm_num=0
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1116 -- # local vmdisks=/dev/nvme0n1
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/dev/nvme0n1@;s@description=\(.*\)@description=\1 (VM=0)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1119 -- # vm_exec 0 'cat > /root/default_integrity.job'
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:06:31.248    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:06:31.248    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:06:31.248    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:06:31.248    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:31.248    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:06:31.248    22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:06:31.248   22:33:31 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'cat > /root/default_integrity.job'
00:06:31.248  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:06:31.508   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1121 -- # false
00:06:31.508   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1125 -- # vm_exec 0 cat /root/default_integrity.job
00:06:31.508   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:06:31.508   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:06:31.508   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:31.508   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:06:31.508   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:06:31.508    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:06:31.508    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:06:31.508    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:06:31.508    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:31.508    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:06:31.508    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:06:31.508   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 cat /root/default_integrity.job
00:06:31.508  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:06:31.767  [global]
00:06:31.768  blocksize_range=4k-512k
00:06:31.768  iodepth=512
00:06:31.768  iodepth_batch=128
00:06:31.768  iodepth_low=256
00:06:31.768  ioengine=libaio
00:06:31.768  size=1G
00:06:31.768  io_size=4G
00:06:31.768  filename=/dev/nvme0n1
00:06:31.768  group_reporting
00:06:31.768  thread
00:06:31.768  numjobs=1
00:06:31.768  direct=1
00:06:31.768  rw=randwrite
00:06:31.768  do_verify=1
00:06:31.768  verify=md5
00:06:31.768  verify_backlog=1024
00:06:31.768  fsync_on_close=1
00:06:31.768  verify_state_save=0
00:06:31.768  [nvme-host]
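The job file echoed back above was produced by the `sed` call at `common.sh@1118`, which stamps the per-VM device path and a VM tag into the shared job template before piping it to `vm_exec`. A minimal reproduction of that substitution (the two-line template here is illustrative, not the real `default_integrity.job`):

```shell
# First expression fills in the empty filename= key; second appends the
# VM number to the description line, as seen in the trace for VM 0.
printf 'filename=\ndescription=base job\n' \
  | sed 's@filename=@filename=/dev/nvme0n1@;s@description=\(.*\)@description=\1 (VM=0)@'
# → filename=/dev/nvme0n1
# → description=base job (VM=0)
```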
00:06:31.768   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1127 -- # true
00:06:31.768    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # vm_fio_socket 0
00:06:31.768    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@326 -- # vm_num_is_valid 0
00:06:31.768    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:06:31.768    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:31.768    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/0
00:06:31.768    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/0/fio_socket
00:06:31.768   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10001 --remote-config /root/default_integrity.job '
00:06:31.768   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1131 -- # true
00:06:31.768   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:06:31.768   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1115 -- # local vm_num=1
00:06:31.768   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1116 -- # local vmdisks=/dev/nvme0n1
00:06:31.768   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/dev/nvme0n1@;s@description=\(.*\)@description=\1 (VM=1)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:06:31.768   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1119 -- # vm_exec 1 'cat > /root/default_integrity.job'
00:06:31.768   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:06:31.768   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:31.768   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:31.768   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:06:31.768   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:06:31.768    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:06:31.768    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:06:31.768    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:31.768    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:31.768    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:06:31.768    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:06:31.768   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/default_integrity.job'
00:06:31.768  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:06:32.027   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1121 -- # false
00:06:32.027   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1125 -- # vm_exec 1 cat /root/default_integrity.job
00:06:32.027   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:06:32.027   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:32.027   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:32.027   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:06:32.027   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:06:32.027    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:06:32.027    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:06:32.027    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:32.027    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:32.027    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:06:32.027    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:06:32.027   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 cat /root/default_integrity.job
00:06:32.027  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:06:32.287  [global]
00:06:32.287  blocksize_range=4k-512k
00:06:32.287  iodepth=512
00:06:32.287  iodepth_batch=128
00:06:32.287  iodepth_low=256
00:06:32.287  ioengine=libaio
00:06:32.287  size=1G
00:06:32.287  io_size=4G
00:06:32.287  filename=/dev/nvme0n1
00:06:32.287  group_reporting
00:06:32.287  thread
00:06:32.287  numjobs=1
00:06:32.287  direct=1
00:06:32.287  rw=randwrite
00:06:32.287  do_verify=1
00:06:32.287  verify=md5
00:06:32.287  verify_backlog=1024
00:06:32.287  fsync_on_close=1
00:06:32.287  verify_state_save=0
00:06:32.287  [nvme-host]
00:06:32.287   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1127 -- # true
00:06:32.288    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # vm_fio_socket 1
00:06:32.288    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@326 -- # vm_num_is_valid 1
00:06:32.288    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:32.288    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:32.288    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/1
00:06:32.288    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/1/fio_socket
00:06:32.288   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10101 --remote-config /root/default_integrity.job '
00:06:32.288   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1131 -- # true
00:06:32.288   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:06:32.288   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1115 -- # local vm_num=2
00:06:32.288   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1116 -- # local vmdisks=/dev/nvme0n1
00:06:32.288   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/dev/nvme0n1@;s@description=\(.*\)@description=\1 (VM=2)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:06:32.288   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1119 -- # vm_exec 2 'cat > /root/default_integrity.job'
00:06:32.288   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:06:32.288   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:32.288   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:32.288   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:06:32.288   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:06:32.288    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:06:32.288    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:06:32.288    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:32.288    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:32.288    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:06:32.288    22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:06:32.288   22:33:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'cat > /root/default_integrity.job'
00:06:32.288  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:06:32.288   22:33:33 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1121 -- # false
00:06:32.288   22:33:33 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1125 -- # vm_exec 2 cat /root/default_integrity.job
00:06:32.288   22:33:33 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:06:32.288   22:33:33 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:32.288   22:33:33 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:32.288   22:33:33 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:06:32.288   22:33:33 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:06:32.288    22:33:33 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:06:32.288    22:33:33 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:06:32.288    22:33:33 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:32.288    22:33:33 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:32.288    22:33:33 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:06:32.288    22:33:33 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:06:32.288   22:33:33 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 cat /root/default_integrity.job
00:06:32.547  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:06:32.547  [global]
00:06:32.547  blocksize_range=4k-512k
00:06:32.547  iodepth=512
00:06:32.547  iodepth_batch=128
00:06:32.547  iodepth_low=256
00:06:32.547  ioengine=libaio
00:06:32.547  size=1G
00:06:32.547  io_size=4G
00:06:32.547  filename=/dev/nvme0n1
00:06:32.547  group_reporting
00:06:32.547  thread
00:06:32.547  numjobs=1
00:06:32.547  direct=1
00:06:32.547  rw=randwrite
00:06:32.547  do_verify=1
00:06:32.547  verify=md5
00:06:32.547  verify_backlog=1024
00:06:32.547  fsync_on_close=1
00:06:32.547  verify_state_save=0
00:06:32.547  [nvme-host]
00:06:32.547   22:33:33 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1127 -- # true
00:06:32.547    22:33:33 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # vm_fio_socket 2
00:06:32.547    22:33:33 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@326 -- # vm_num_is_valid 2
00:06:32.547    22:33:33 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:32.547    22:33:33 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:32.547    22:33:33 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/2
00:06:32.547    22:33:33 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/2/fio_socket
00:06:32.547   22:33:33 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10201 --remote-config /root/default_integrity.job '
00:06:32.547   22:33:33 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1131 -- # true
00:06:32.547   22:33:33 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1147 -- # true
00:06:32.547   22:33:33 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1161 -- # /usr/src/fio-static/fio --eta=never --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal --client=127.0.0.1,10001 --remote-config /root/default_integrity.job --client=127.0.0.1,10101 --remote-config /root/default_integrity.job --client=127.0.0.1,10201 --remote-config /root/default_integrity.job
00:06:47.433   22:33:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1162 -- # sleep 1
00:06:48.371   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1164 -- # [[ normal == \j\s\o\n ]]
00:06:48.371   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1172 -- # [[ ! -n '' ]]
00:06:48.371   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1173 -- # cat /root/vhost_test/fio_results/default_integrity.log
00:06:48.371  hostname=VM-2-8-9, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:06:48.371  hostname=VM-1-6-7, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:06:48.371  hostname=VM-0-4-5, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:06:48.371  <VM-2-8-9> nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:06:48.371  <VM-1-6-7> nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:06:48.371  <VM-0-4-5> nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:06:48.371  <VM-2-8-9> Starting 1 thread
00:06:48.371  <VM-1-6-7> Starting 1 thread
00:06:48.371  <VM-0-4-5> Starting 1 thread
00:06:48.371  <VM-2-8-9> 
00:06:48.371  nvme-host: (groupid=0, jobs=1): err= 0: pid=948: Tue Dec 10 22:33:46 2024
00:06:48.371    read: IOPS=994, BW=167MiB/s (175MB/s)(2048MiB/12281msec)
00:06:48.371      slat (usec): min=43, max=58235, avg=11938.56, stdev=9811.17
00:06:48.371      clat (msec): min=6, max=488, avg=184.82, stdev=97.43
00:06:48.371       lat (msec): min=9, max=497, avg=196.76, stdev=99.94
00:06:48.371      clat percentiles (msec):
00:06:48.371       |  1.00th=[    8],  5.00th=[   28], 10.00th=[   65], 20.00th=[   91],
00:06:48.371       | 30.00th=[  126], 40.00th=[  155], 50.00th=[  180], 60.00th=[  207],
00:06:48.371       | 70.00th=[  234], 80.00th=[  271], 90.00th=[  317], 95.00th=[  351],
00:06:48.371       | 99.00th=[  439], 99.50th=[  460], 99.90th=[  481], 99.95th=[  485],
00:06:48.371       | 99.99th=[  489]
00:06:48.371    write: IOPS=1053, BW=177MiB/s (185MB/s)(2048MiB/11587msec); 0 zone resets
00:06:48.371      slat (usec): min=347, max=110627, avg=30889.50, stdev=20238.90
00:06:48.371      clat (msec): min=13, max=398, avg=151.96, stdev=81.57
00:06:48.371       lat (msec): min=13, max=458, avg=182.85, stdev=88.45
00:06:48.371      clat percentiles (msec):
00:06:48.371       |  1.00th=[   16],  5.00th=[   35], 10.00th=[   49], 20.00th=[   84],
00:06:48.371       | 30.00th=[  100], 40.00th=[  120], 50.00th=[  142], 60.00th=[  161],
00:06:48.371       | 70.00th=[  192], 80.00th=[  224], 90.00th=[  259], 95.00th=[  309],
00:06:48.371       | 99.00th=[  363], 99.50th=[  397], 99.90th=[  397], 99.95th=[  397],
00:06:48.371       | 99.99th=[  397]
00:06:48.371     bw (  KiB/s): min= 4662, max=407248, per=100.00%, avg=220752.32, stdev=121382.18, samples=19
00:06:48.371     iops        : min=   23, max= 2048, avg=1285.00, stdev=725.62, samples=19
00:06:48.371    lat (msec)   : 10=1.13%, 20=1.65%, 50=6.25%, 100=17.39%, 250=53.74%
00:06:48.371    lat (msec)   : 500=19.85%
00:06:48.371    cpu          : usr=81.76%, sys=1.86%, ctx=693, majf=0, minf=34
00:06:48.371    IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.5%, >=64=99.1%
00:06:48.371       submit    : 0=0.0%, 4=0.0%, 8=1.2%, 16=0.0%, 32=0.0%, 64=19.2%, >=64=79.6%
00:06:48.371       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:06:48.371       issued rwts: total=12208,12208,0,0 short=0,0,0,0 dropped=0,0,0,0
00:06:48.371       latency   : target=0, window=0, percentile=100.00%, depth=512
00:06:48.371  
00:06:48.371  Run status group 0 (all jobs):
00:06:48.371     READ: bw=167MiB/s (175MB/s), 167MiB/s-167MiB/s (175MB/s-175MB/s), io=2048MiB (2147MB), run=12281-12281msec
00:06:48.371    WRITE: bw=177MiB/s (185MB/s), 177MiB/s-177MiB/s (185MB/s-185MB/s), io=2048MiB (2147MB), run=11587-11587msec
00:06:48.371  
00:06:48.371  Disk stats (read/write):
00:06:48.371    nvme0n1: ios=5/0, merge=0/0, ticks=19/0, in_queue=19, util=21.76%
00:06:48.371  <VM-1-6-7> 
00:06:48.371  nvme-host: (groupid=0, jobs=1): err= 0: pid=949: Tue Dec 10 22:33:47 2024
00:06:48.371    read: IOPS=843, BW=164MiB/s (172MB/s)(2072MiB/12615msec)
00:06:48.371      slat (usec): min=25, max=33090, avg=11095.77, stdev=7245.90
00:06:48.371      clat (usec): min=635, max=57414, avg=23272.61, stdev=13177.75
00:06:48.371       lat (usec): min=3319, max=58335, avg=34368.39, stdev=12260.83
00:06:48.371      clat percentiles (usec):
00:06:48.371       |  1.00th=[ 2089],  5.00th=[ 3359], 10.00th=[ 7373], 20.00th=[12518],
00:06:48.371       | 30.00th=[13960], 40.00th=[16581], 50.00th=[21890], 60.00th=[27395],
00:06:48.371       | 70.00th=[31065], 80.00th=[34866], 90.00th=[43779], 95.00th=[44827],
00:06:48.371       | 99.00th=[53216], 99.50th=[57410], 99.90th=[57410], 99.95th=[57410],
00:06:48.371       | 99.99th=[57410]
00:06:48.371    write: IOPS=1735, BW=338MiB/s (354MB/s)(2072MiB/6131msec); 0 zone resets
00:06:48.371      slat (usec): min=318, max=132832, avg=31777.55, stdev=20846.78
00:06:48.371      clat (msec): min=3, max=292, avg=73.25, stdev=56.22
00:06:48.371       lat (msec): min=4, max=301, avg=105.03, stdev=64.25
00:06:48.371      clat percentiles (msec):
00:06:48.371       |  1.00th=[    5],  5.00th=[    7], 10.00th=[   11], 20.00th=[   14],
00:06:48.371       | 30.00th=[   21], 40.00th=[   51], 50.00th=[   65], 60.00th=[   75],
00:06:48.371       | 70.00th=[  111], 80.00th=[  132], 90.00th=[  155], 95.00th=[  174],
00:06:48.371       | 99.00th=[  201], 99.50th=[  205], 99.90th=[  292], 99.95th=[  292],
00:06:48.371       | 99.99th=[  292]
00:06:48.371     bw (  KiB/s): min=157144, max=314288, per=49.20%, avg=170239.33, stdev=44366.44, samples=24
00:06:48.371     iops        : min=  788, max= 1576, avg=853.67, stdev=222.48, samples=24
00:06:48.371    lat (usec)   : 750=0.36%
00:06:48.371    lat (msec)   : 4=3.48%, 10=5.92%, 20=28.76%, 50=30.28%, 100=14.27%
00:06:48.371    lat (msec)   : 250=16.86%, 500=0.08%
00:06:48.371    cpu          : usr=85.18%, sys=2.00%, ctx=1027, majf=0, minf=16
00:06:48.371    IO depths    : 1=0.0%, 2=0.6%, 4=1.2%, 8=1.8%, 16=3.6%, 32=7.8%, >=64=84.8%
00:06:48.371       submit    : 0=0.0%, 4=1.8%, 8=1.8%, 16=3.2%, 32=6.4%, 64=11.8%, >=64=75.0%
00:06:48.371       complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:06:48.371       issued rwts: total=10638,10638,0,0 short=0,0,0,0 dropped=0,0,0,0
00:06:48.371       latency   : target=0, window=0, percentile=100.00%, depth=512
00:06:48.371  
00:06:48.371  Run status group 0 (all jobs):
00:06:48.371     READ: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=2072MiB (2172MB), run=12615-12615msec
00:06:48.371    WRITE: bw=338MiB/s (354MB/s), 338MiB/s-338MiB/s (354MB/s-354MB/s), io=2072MiB (2172MB), run=6131-6131msec
00:06:48.371  
00:06:48.371  Disk stats (read/write):
00:06:48.371    nvme0n1: ios=80/0, merge=0/0, ticks=5/0, in_queue=5, util=25.96%
00:06:48.371  <VM-0-4-5> 
00:06:48.371  nvme-host: (groupid=0, jobs=1): err= 0: pid=950: Tue Dec 10 22:33:47 2024
00:06:48.371    read: IOPS=826, BW=161MiB/s (169MB/s)(2072MiB/12865msec)
00:06:48.371      slat (usec): min=23, max=31526, avg=12689.94, stdev=8438.84
00:06:48.371      clat (usec): min=2100, max=77656, avg=28683.73, stdev=16199.60
00:06:48.371       lat (usec): min=8121, max=78731, avg=41373.67, stdev=15479.23
00:06:48.371      clat percentiles (usec):
00:06:48.371       |  1.00th=[ 2147],  5.00th=[ 7439], 10.00th=[ 9503], 20.00th=[13566],
00:06:48.371       | 30.00th=[17957], 40.00th=[20317], 50.00th=[26084], 60.00th=[33162],
00:06:48.371       | 70.00th=[39060], 80.00th=[42206], 90.00th=[50594], 95.00th=[55837],
00:06:48.371       | 99.00th=[77071], 99.50th=[78119], 99.90th=[78119], 99.95th=[78119],
00:06:48.371       | 99.99th=[78119]
00:06:48.371    write: IOPS=1723, BW=336MiB/s (352MB/s)(2072MiB/6172msec); 0 zone resets
00:06:48.371      slat (usec): min=258, max=108917, avg=32368.06, stdev=21181.10
00:06:48.371      clat (msec): min=3, max=241, avg=76.27, stdev=56.47
00:06:48.371       lat (msec): min=4, max=286, avg=108.64, stdev=64.43
00:06:48.371      clat percentiles (msec):
00:06:48.371       |  1.00th=[    6],  5.00th=[    9], 10.00th=[   12], 20.00th=[   17],
00:06:48.371       | 30.00th=[   30], 40.00th=[   46], 50.00th=[   64], 60.00th=[   82],
00:06:48.371       | 70.00th=[  112], 80.00th=[  129], 90.00th=[  161], 95.00th=[  186],
00:06:48.371       | 99.00th=[  201], 99.50th=[  218], 99.90th=[  230], 99.95th=[  230],
00:06:48.371       | 99.99th=[  241]
00:06:48.371     bw (  KiB/s): min=156830, max=314288, per=47.47%, avg=163175.92, stdev=30821.00, samples=26
00:06:48.371     iops        : min=  786, max= 1576, avg=818.23, stdev=154.56, samples=26
00:06:48.371    lat (msec)   : 4=1.27%, 10=7.77%, 20=22.26%, 50=33.88%, 100=17.28%
00:06:48.371    lat (msec)   : 250=17.54%
00:06:48.371    cpu          : usr=81.97%, sys=1.76%, ctx=1213, majf=0, minf=16
00:06:48.371    IO depths    : 1=0.0%, 2=0.6%, 4=1.2%, 8=1.8%, 16=3.6%, 32=7.8%, >=64=84.8%
00:06:48.371       submit    : 0=0.0%, 4=1.8%, 8=1.8%, 16=3.2%, 32=6.4%, 64=11.8%, >=64=75.0%
00:06:48.371       complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:06:48.371       issued rwts: total=10638,10638,0,0 short=0,0,0,0 dropped=0,0,0,0
00:06:48.371       latency   : target=0, window=0, percentile=100.00%, depth=512
00:06:48.371  
00:06:48.371  Run status group 0 (all jobs):
00:06:48.371     READ: bw=161MiB/s (169MB/s), 161MiB/s-161MiB/s (169MB/s-169MB/s), io=2072MiB (2172MB), run=12865-12865msec
00:06:48.371    WRITE: bw=336MiB/s (352MB/s), 336MiB/s-336MiB/s (352MB/s-352MB/s), io=2072MiB (2172MB), run=6172-6172msec
00:06:48.371  
00:06:48.371  Disk stats (read/write):
00:06:48.371    nvme0n1: ios=80/0, merge=0/0, ticks=43/0, in_queue=43, util=27.89%
00:06:48.371  All clients: (groupid=0, jobs=3): err= 0: pid=0: Tue Dec 10 22:33:47 2024
00:06:48.371    read: IOPS=2602, BW=481Mi (505M)(6191MiB/12865msec)
00:06:48.371      slat (usec): min=23, max=58235, avg=11909.52, stdev=8648.73
00:06:48.371      clat (usec): min=635, max=488615, avg=83890.37, stdev=97205.52
00:06:48.371       lat (msec): min=3, max=497, avg=95.80, stdev=98.09
00:06:48.371    write: IOPS=2889, BW=534Mi (560M)(6191MiB/11587msec); 0 zone resets
00:06:48.371      slat (usec): min=258, max=132832, avg=31641.38, stdev=20743.69
00:06:48.371      clat (msec): min=3, max=398, avg=102.91, stdev=76.32
00:06:48.371       lat (msec): min=4, max=458, avg=134.55, stdev=82.60
00:06:48.371     bw (  KiB/s): min=318636, max=1035824, per=63.64%, avg=554167.57, stdev=70107.74, samples=69
00:06:48.371     iops        : min= 1597, max= 5200, avg=2956.90, stdev=406.08, samples=69
00:06:48.371    lat (usec)   : 750=0.11%
00:06:48.371    lat (msec)   : 4=1.51%, 10=4.76%, 20=16.81%, 50=22.66%, 100=16.36%
00:06:48.371    lat (msec)   : 250=30.53%, 500=7.26%
00:06:48.371    cpu          : usr=82.98%, sys=1.87%, ctx=2933, majf=0, minf=66
00:06:48.371    IO depths    : 1=0.0%, 2=0.4%, 4=0.8%, 8=1.1%, 16=2.3%, 32=5.2%, >=64=90.0%
00:06:48.371       submit    : 0=0.0%, 4=1.2%, 8=1.6%, 16=2.1%, 32=4.1%, 64=14.4%, >=64=76.6%
00:06:48.371       complete  : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5%
00:06:48.372       issued rwts: total=33484,33484,0,0 short=0,0,0,0 dropped=0,0,0,0
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@75 -- # timing_exit run_vm_cmd
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@77 -- # vm_shutdown_all
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:06:48.372    22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@489 -- # vm_list_all
00:06:48.372    22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@466 -- # vms=()
00:06:48.372    22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@466 -- # local vms
00:06:48.372    22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:06:48.372    22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@468 -- # (( 3 > 0 ))
00:06:48.372    22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/0 /root/vhost_test/vms/1 /root/vhost_test/vms/2
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@492 -- # vm_shutdown 0
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@417 -- # vm_num_is_valid 0
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/0
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/0 ]]
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@424 -- # vm_is_running 0
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:06:48.372    22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/0/qemu.pid
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=53430
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 53430
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/0'
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/0'
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/0'
00:06:48.372  INFO: Shutting down virtual machine /root/vhost_test/vms/0
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@432 -- # set +e
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@433 -- # vm_exec 0 'nohup sh -c '\''shutdown -h -P now'\'''
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:06:48.372    22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:06:48.372    22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:06:48.372    22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:06:48.372    22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:48.372    22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:06:48.372    22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:06:48.372   22:33:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:06:48.372  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@434 -- # notice 'VM0 is shutting down - wait a while to complete'
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'VM0 is shutting down - wait a while to complete'
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: VM0 is shutting down - wait a while to complete'
00:06:48.372  INFO: VM0 is shutting down - wait a while to complete
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@435 -- # set -e
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@492 -- # vm_shutdown 1
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@424 -- # vm_is_running 1
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:06:48.372    22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=53666
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 53666
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:06:48.372  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@432 -- # set +e
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:06:48.372    22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:06:48.372    22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:06:48.372    22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:48.372    22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:48.372    22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:06:48.372    22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:06:48.372   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:06:48.632  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:06:48.632  INFO: VM1 is shutting down - wait a while to complete
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@435 -- # set -e
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@492 -- # vm_shutdown 2
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@417 -- # vm_num_is_valid 2
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/2
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/2 ]]
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@424 -- # vm_is_running 2
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 2
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/2
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:06:48.632    22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/2/qemu.pid
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=53899
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 53899
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/2'
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/2'
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/2'
00:06:48.632  INFO: Shutting down virtual machine /root/vhost_test/vms/2
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@432 -- # set +e
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@433 -- # vm_exec 2 'nohup sh -c '\''shutdown -h -P now'\'''
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:06:48.632    22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:06:48.632    22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:06:48.632    22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:48.632    22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:48.632    22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:06:48.632    22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:06:48.632   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:06:48.891  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@434 -- # notice 'VM2 is shutting down - wait a while to complete'
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'VM2 is shutting down - wait a while to complete'
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: VM2 is shutting down - wait a while to complete'
00:06:48.891  INFO: VM2 is shutting down - wait a while to complete
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@435 -- # set -e
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:06:48.891  INFO: Waiting for VMs to shutdown...
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 3 > 0 ))
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 0
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:06:48.891    22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/0/qemu.pid
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=53430
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 53430
00:06:48.891   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:06:48.892   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:06:48.892   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 1
00:06:48.892   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:06:48.892   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:48.892   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:48.892   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:06:48.892   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:06:48.892   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:06:48.892    22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:06:48.892   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=53666
00:06:48.892   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 53666
00:06:48.892   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:06:48.892   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:06:48.892   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 2
00:06:48.892   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 2
00:06:48.892   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:48.892   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:48.892   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/2
00:06:48.892   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:06:48.892   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:06:48.892    22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/2/qemu.pid
00:06:48.892   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=53899
00:06:48.892   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 53899
00:06:48.892   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:06:48.892   22:33:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@500 -- # sleep 1
00:06:49.459  [2024-12-10 22:33:50.129971] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: disabling controller
00:06:49.719  [2024-12-10 22:33:50.426605] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 3 > 0 ))
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 0
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 1
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 2
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 2
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/2
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:06:49.978    22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/2/qemu.pid
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=53899
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 53899
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:06:49.978   22:33:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@500 -- # sleep 1
00:06:50.236  [2024-12-10 22:33:50.788857] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: disabling controller
00:06:51.172   22:33:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:06:51.172   22:33:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:06:51.172   22:33:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 2
00:06:51.172   22:33:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 2
00:06:51.172   22:33:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:51.172   22:33:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:51.172   22:33:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/2
00:06:51.172   22:33:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:06:51.172   22:33:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:06:51.172   22:33:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:06:51.172   22:33:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@500 -- # sleep 1
00:06:52.109   22:33:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:06:52.109   22:33:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:06:52.109   22:33:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:06:52.109   22:33:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:06:52.109   22:33:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:06:52.109   22:33:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:06:52.109   22:33:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:06:52.109   22:33:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:06:52.109   22:33:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:06:52.109   22:33:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:06:52.109  INFO: All VMs successfully shut down
00:06:52.109   22:33:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@505 -- # return 0
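The shutdown-wait loop traced above (vhost/common.sh lines 495-505) can be sketched as below. This is a minimal reconstruction from the xtrace, not the real `vhost/common.sh`: `vm_is_running` and `wait_for_shutdown` here take VM directories directly, and the 90-second timeout is an assumption. A VM counts as running while its `qemu.pid` file is readable and the recorded PID still answers `kill -0`; each pass drops VMs that have exited, then sleeps one second.

```shell
#!/usr/bin/env bash
# Hedged sketch of the shutdown-wait loop from the trace above.
# A VM is "running" while its qemu.pid exists and that PID is alive.
vm_is_running() {
  local vm_dir=$1 vm_pid
  [[ -r "$vm_dir/qemu.pid" ]] || return 1     # pid file gone -> shut down
  vm_pid=$(<"$vm_dir/qemu.pid")
  kill -0 "$vm_pid" 2>/dev/null               # process gone -> shut down
}

wait_for_shutdown() {
  local -A vms=()
  local vm timeo=90                           # assumed grace period
  for vm in "$@"; do vms[$vm]=1; done
  while (( timeo-- > 0 && ${#vms[@]} > 0 )); do
    for vm in "${!vms[@]}"; do
      # mirrors: vm_is_running $vm || unset -v 'vms[vm]'
      vm_is_running "$vm" || unset -v "vms[$vm]"
    done
    (( ${#vms[@]} > 0 )) && sleep 1
  done
  (( ${#vms[@]} == 0 ))                       # success iff every VM exited
}
```

Polling `kill -0` against the saved `qemu.pid` (rather than parsing `ps`) is why the trace shows a fresh `cat .../qemu.pid` and `/bin/kill -0 <pid>` pair per VM per second until all three controllers are disabled.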
00:06:52.109   22:33:52 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@79 -- # timing_enter clean_vfio_user
00:06:52.109   22:33:52 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:52.109   22:33:52 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:06:52.109    22:33:52 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@81 -- # seq 0 2
00:06:52.109   22:33:52 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@81 -- # for i in $(seq 0 $vm_no)
00:06:52.109   22:33:52 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@82 -- # vm_muser_dir=/root/vhost_test/vms/0/muser
00:06:52.109   22:33:52 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@83 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_listener nqn.2019-07.io.spdk:cnode0 -t vfiouser -a /root/vhost_test/vms/0/muser/domain/muser0/0 -s 0
00:06:52.368   22:33:52 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@84 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_delete_subsystem nqn.2019-07.io.spdk:cnode0
00:06:52.368   22:33:53 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@85 -- # (( i == vm_no ))
00:06:52.368   22:33:53 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@88 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_delete Malloc0
00:06:52.936   22:33:53 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@81 -- # for i in $(seq 0 $vm_no)
00:06:52.936   22:33:53 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@82 -- # vm_muser_dir=/root/vhost_test/vms/1/muser
00:06:52.936   22:33:53 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@83 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_listener nqn.2019-07.io.spdk:cnode1 -t vfiouser -a /root/vhost_test/vms/1/muser/domain/muser1/1 -s 0
00:06:52.936   22:33:53 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@84 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_delete_subsystem nqn.2019-07.io.spdk:cnode1
00:06:53.194   22:33:53 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@85 -- # (( i == vm_no ))
00:06:53.194   22:33:53 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@88 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_delete Malloc1
00:06:53.759   22:33:54 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@81 -- # for i in $(seq 0 $vm_no)
00:06:53.759   22:33:54 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@82 -- # vm_muser_dir=/root/vhost_test/vms/2/muser
00:06:53.759   22:33:54 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@83 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_listener nqn.2019-07.io.spdk:cnode2 -t vfiouser -a /root/vhost_test/vms/2/muser/domain/muser2/2 -s 0
00:06:54.018   22:33:54 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@84 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_delete_subsystem nqn.2019-07.io.spdk:cnode2
00:06:54.277   22:33:54 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@85 -- # (( i == vm_no ))
00:06:54.277   22:33:54 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@86 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_detach_controller Nvme0
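The per-VM teardown just executed (vfio_user_fio.sh lines 81-88) follows a fixed order: remove the vfio-user listener, delete the NVMe-oF subsystem, then release the backing bdev. The sketch below reconstructs that ordering from the trace; the `rpc` wrapper is a hypothetical stand-in for `scripts/rpc.py -s <sock>` that only echoes the call, so the sequence can be inspected without a live SPDK target.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the vfio-user cleanup order seen in the trace.
# `rpc` is NOT the real client; it echoes what rpc.py would be invoked with.
rpc() { echo "rpc.py $*"; }

clean_vfio_user() {
  local vm_no=$1 i
  for i in $(seq 0 "$vm_no"); do
    # 1) stop accepting vfio-user connections for this VM's subsystem
    rpc nvmf_subsystem_remove_listener "nqn.2019-07.io.spdk:cnode$i" \
        -t vfiouser -a "/root/vhost_test/vms/$i/muser/domain/muser$i/$i" -s 0
    # 2) tear down the subsystem itself
    rpc nvmf_delete_subsystem "nqn.2019-07.io.spdk:cnode$i"
    # 3) release the backing bdev: the last VM used the physical NVMe
    #    controller, the others used Malloc bdevs (the (( i == vm_no )) branch)
    if (( i == vm_no )); then
      rpc bdev_nvme_detach_controller Nvme0
    else
      rpc bdev_malloc_delete "Malloc$i"
    fi
  done
}
```

Listener removal must come first: deleting a subsystem that is still reachable over vfio-user while a guest could reconnect would race with the controller-disable notices logged earlier.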
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@92 -- # vhost_kill 0
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@202 -- # local rc=0
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@203 -- # local vhost_name=0
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@210 -- # local vhost_dir
00:06:56.184    22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@211 -- # get_vhost_dir 0
00:06:56.184    22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:06:56.184    22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:06:56.184    22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@220 -- # local vhost_pid
00:06:56.184    22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@221 -- # vhost_pid=51913
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@222 -- # notice 'killing vhost (PID 51913) app'
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 51913) app'
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 51913) app'
00:06:56.184  INFO: killing vhost (PID 51913) app
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@224 -- # kill -INT 51913
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:06:56.184  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@226 -- # (( i = 0 ))
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@227 -- # kill -0 51913
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@228 -- # echo .
00:06:56.184  .
00:06:56.184   22:33:56 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@229 -- # sleep 1
00:06:57.124   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@226 -- # (( i++ ))
00:06:57.124   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:06:57.124   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@227 -- # kill -0 51913
00:06:57.124  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (51913) - No such process
00:06:57.124   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@231 -- # break
00:06:57.124   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@234 -- # kill -0 51913
00:06:57.124  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (51913) - No such process
00:06:57.124   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@239 -- # kill -0 51913
00:06:57.124  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (51913) - No such process
00:06:57.124   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@245 -- # is_pid_child 51913
00:06:57.124   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1686 -- # local pid=51913 _pid
00:06:57.124    22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1685 -- # jobs -pr
00:06:57.124   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1688 -- # read -r _pid
00:06:57.124   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1689 -- # (( pid == _pid ))
00:06:57.124   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1688 -- # read -r _pid
00:06:57.124   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1692 -- # return 1
00:06:57.124   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:06:57.124   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:57.124   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:06:57.124   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:06:57.124   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@261 -- # return 0
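The `vhost_kill` sequence above (vhost/common.sh lines 224-239) is a SIGINT-then-poll pattern: send SIGINT, probe the PID with `kill -0` once per second for up to 60 seconds, and escalate only if the app never exits. The `kill: (51913) - No such process` lines in the log are that probe failing once the vhost app has gone away. A minimal sketch, with `graceful_kill` as an invented name and SIGKILL escalation assumed as the fallback:

```shell
#!/usr/bin/env bash
# Hedged sketch of the SIGINT-then-poll shutdown seen in the trace.
graceful_kill() {
  local pid=$1 timeout=${2:-60} i
  kill -INT "$pid" 2>/dev/null || return 0      # already gone
  for (( i = 0; i < timeout; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 0      # exited within the grace period
    sleep 1
  done
  kill -9 "$pid" 2>/dev/null                    # assumed last resort
}
```

`kill -0` sends no signal; it only checks whether the PID exists and is signalable, which makes it a cheap liveness probe between the SIGINT and any forced kill.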
00:06:57.124   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@93 -- # timing_exit clean_vfio_user
00:06:57.124   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:57.124   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:06:57.124   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@94 -- # vhosttestfini
00:06:57.124   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:06:57.124   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@1 -- # clean_vfio_user
00:06:57.124   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@6 -- # vm_kill_all
00:06:57.124   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@476 -- # local vm
00:06:57.124    22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@477 -- # vm_list_all
00:06:57.124    22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@466 -- # vms=()
00:06:57.124    22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@466 -- # local vms
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@468 -- # (( 3 > 0 ))
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/0 /root/vhost_test/vms/1 /root/vhost_test/vms/2
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@478 -- # vm_kill 0
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@442 -- # vm_num_is_valid 0
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/0
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@446 -- # return 0
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@478 -- # vm_kill 1
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@442 -- # vm_num_is_valid 1
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/1
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@446 -- # return 0
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@478 -- # vm_kill 2
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@442 -- # vm_num_is_valid 2
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/2
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@446 -- # return 0
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@7 -- # vhost_kill 0
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@202 -- # local rc=0
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@203 -- # local vhost_name=0
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@210 -- # local vhost_dir
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@211 -- # get_vhost_dir 0
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@215 -- # warning 'no vhost pid file found'
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@90 -- # message WARN 'no vhost pid file found'
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=WARN
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'WARN: no vhost pid file found'
00:06:57.125  WARN: no vhost pid file found
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@216 -- # return 0
00:06:57.125  
00:06:57.125  real	1m10.605s
00:06:57.125  user	4m41.061s
00:06:57.125  sys	0m2.886s
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:06:57.125  ************************************
00:06:57.125  END TEST vfio_user_nvme_fio
00:06:57.125  ************************************
00:06:57.125   22:33:57 vfio_user_qemu -- vfio_user/vfio_user.sh@16 -- # run_test vfio_user_nvme_restart_vm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_restart_vm.sh
00:06:57.125   22:33:57 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:57.125   22:33:57 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:57.125   22:33:57 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:06:57.125  ************************************
00:06:57.125  START TEST vfio_user_nvme_restart_vm
00:06:57.125  ************************************
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_restart_vm.sh
00:06:57.125  * Looking for test storage...
00:06:57.125  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:57.125     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1711 -- # lcov --version
00:06:57.125     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@336 -- # IFS=.-:
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@336 -- # read -ra ver1
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@337 -- # IFS=.-:
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@337 -- # read -ra ver2
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@338 -- # local 'op=<'
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@340 -- # ver1_l=2
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@341 -- # ver2_l=1
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@344 -- # case "$op" in
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@345 -- # : 1
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:57.125     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@365 -- # decimal 1
00:06:57.125     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@353 -- # local d=1
00:06:57.125     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:57.125     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@355 -- # echo 1
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@365 -- # ver1[v]=1
00:06:57.125     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@366 -- # decimal 2
00:06:57.125     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@353 -- # local d=2
00:06:57.125     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:57.125     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@355 -- # echo 2
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@366 -- # ver2[v]=2
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@368 -- # return 0
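The trace above exercises the version-comparison helper in scripts/common.sh (the `ver1`/`ver2` arrays split on `IFS=.-:`, then compared component by component). A minimal standalone sketch of that logic, with names taken from the xtrace and the fallback behavior assumed, could look like:

```shell
#!/usr/bin/env bash
# Sketch of the "is ver1 older than ver2" check traced above.
# Splitting on IFS=.-: and zero-padding short versions are taken
# from the xtrace; everything else is an illustrative assumption.
version_lt() {
    local IFS=.-: v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1   # ver1 is newer
        (( a < b )) && return 0   # ver1 is older
    done
    return 1                      # versions are equal
}

version_lt 1.9 2.0 && echo older || echo "not older"
```

This matches the trace, where `decimal` normalizes each component before the `(( ver1[v] < ver2[v] ))` comparison returns 0 for `1 < 2`.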
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:57.125  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:57.125  		--rc genhtml_branch_coverage=1
00:06:57.125  		--rc genhtml_function_coverage=1
00:06:57.125  		--rc genhtml_legend=1
00:06:57.125  		--rc geninfo_all_blocks=1
00:06:57.125  		--rc geninfo_unexecuted_blocks=1
00:06:57.125  		
00:06:57.125  		'
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:57.125  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:57.125  		--rc genhtml_branch_coverage=1
00:06:57.125  		--rc genhtml_function_coverage=1
00:06:57.125  		--rc genhtml_legend=1
00:06:57.125  		--rc geninfo_all_blocks=1
00:06:57.125  		--rc geninfo_unexecuted_blocks=1
00:06:57.125  		
00:06:57.125  		'
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:06:57.125  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:57.125  		--rc genhtml_branch_coverage=1
00:06:57.125  		--rc genhtml_function_coverage=1
00:06:57.125  		--rc genhtml_legend=1
00:06:57.125  		--rc geninfo_all_blocks=1
00:06:57.125  		--rc geninfo_unexecuted_blocks=1
00:06:57.125  		
00:06:57.125  		'
00:06:57.125    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:06:57.125  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:57.125  		--rc genhtml_branch_coverage=1
00:06:57.125  		--rc genhtml_function_coverage=1
00:06:57.125  		--rc genhtml_legend=1
00:06:57.125  		--rc geninfo_all_blocks=1
00:06:57.125  		--rc geninfo_unexecuted_blocks=1
00:06:57.125  		
00:06:57.125  		'
00:06:57.125   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:06:57.126    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@6 -- # : 128
00:06:57.126    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@7 -- # : 512
00:06:57.126    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:06:57.126     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@6 -- # : false
00:06:57.126     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@7 -- # : /root/vhost_test
00:06:57.126     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:06:57.126     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@9 -- # : qemu-img
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:06:57.126     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:06:57.126     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:06:57.126     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:06:57.126     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:06:57.126     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:06:57.126     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:06:57.126       22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_restart_vm.sh
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:06:57.126     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:06:57.126     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:06:57.126     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:06:57.126     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:06:57.126     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:06:57.126     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@2 -- # vhost_0_main_core=0
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:06:57.126     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:06:57.126      22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:06:57.126       22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:06:57.126        22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@244 -- # check_cgroup
00:06:57.126        22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:06:57.126        22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:06:57.126        22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@10 -- # echo 2
00:06:57.126       22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:06:57.126    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:06:57.126    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:06:57.126    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:06:57.126   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/common.sh
00:06:57.126   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config
00:06:57.126    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]'
00:06:57.126    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0
00:06:57.126    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5
00:06:57.126    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:06:57.126    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7
00:06:57.126    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:06:57.126    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9
00:06:57.126    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:06:57.126   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@13 -- # bdfs=($(get_nvme_bdfs))
00:06:57.126    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@13 -- # get_nvme_bdfs
00:06:57.126    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1498 -- # bdfs=()
00:06:57.126    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1498 -- # local bdfs
00:06:57.126    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:57.127     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:06:57.127     22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:06:57.127    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:06:57.127    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0d:00.0
00:06:57.127    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@14 -- # get_vhost_dir 0
00:06:57.127    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:06:57.127    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:06:57.127    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:06:57.127   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@14 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
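The `get_vhost_dir 0` calls in the trace resolve the per-instance state directory under TARGET_DIR before `rpc_py` is pointed at its rpc.sock. A minimal sketch consistent with the traced behavior (the empty-name fallback to "0" is an assumption inferred from the `[[ -z 0 ]]` check at vhost/common.sh@107):

```shell
#!/usr/bin/env bash
# Sketch of get_vhost_dir as it behaves in the trace above.
# TARGET_DIR matches the value set at vhost/common.sh@13.
TARGET_DIR=/root/vhost_test/vhost

get_vhost_dir() {
    local vhost_name="$1"
    # Assumed fallback: an empty name selects instance "0".
    [[ -z $vhost_name ]] && vhost_name=0
    echo "$TARGET_DIR/$vhost_name"
}

get_vhost_dir 0   # prints /root/vhost_test/vhost/0
```

The resulting path is then reused for the pid file (`vhost.pid`) and the RPC socket (`rpc.sock`) in nvme/common.sh.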
00:06:57.127   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@16 -- # trap clean_vfio_user EXIT
00:06:57.127   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@18 -- # vhosttestinit
00:06:57.127   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:06:57.127   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:06:57.127   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:06:57.127   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:06:57.127   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@20 -- # vfio_user_run 0
00:06:57.127   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@11 -- # local vhost_name=0
00:06:57.127   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@12 -- # local vfio_user_dir nvmf_pid_file rpc_py
00:06:57.127    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@14 -- # get_vhost_dir 0
00:06:57.127    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:06:57.127    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:06:57.127    22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:06:57.127   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@14 -- # vfio_user_dir=/root/vhost_test/vhost/0
00:06:57.127   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@15 -- # nvmf_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:06:57.127   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@16 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:06:57.127   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@18 -- # mkdir -p /root/vhost_test/vhost/0
00:06:57.387   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@20 -- # timing_enter vfio_user_start
00:06:57.387   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:57.387   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:06:57.387   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/nvmf_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512
00:06:57.387   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@22 -- # nvmfpid=65532
00:06:57.387   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@23 -- # echo 65532
00:06:57.387   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@25 -- # echo 'Process pid: 65532'
00:06:57.387  Process pid: 65532
00:06:57.387   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@26 -- # echo 'waiting for app to run...'
00:06:57.387  waiting for app to run...
00:06:57.387   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@27 -- # waitforlisten 65532 /root/vhost_test/vhost/0/rpc.sock
00:06:57.387   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@835 -- # '[' -z 65532 ']'
00:06:57.387   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:06:57.387   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:57.387   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:06:57.387  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:06:57.387   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:57.387   22:33:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:06:57.387  [2024-12-10 22:33:57.998101] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:06:57.387  [2024-12-10 22:33:57.998224] [ DPDK EAL parameters: nvmf --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65532 ]
00:06:57.387  EAL: No free 2048 kB hugepages reported on node 1
00:06:57.647  [2024-12-10 22:33:58.262611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:57.647  [2024-12-10 22:33:58.397795] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:06:57.647  [2024-12-10 22:33:58.397818] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:06:57.647  [2024-12-10 22:33:58.397870] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:57.647  [2024-12-10 22:33:58.397874] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:06:58.215   22:33:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:58.215   22:33:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@868 -- # return 0
00:06:58.215   22:33:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@29 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_transport -t VFIOUSER
00:06:58.473   22:33:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@30 -- # timing_exit vfio_user_start
00:06:58.473   22:33:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:58.473   22:33:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:06:58.473   22:33:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@22 -- # vm_muser_dir=/root/vhost_test/vms/1/muser
00:06:58.473   22:33:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@23 -- # rm -rf /root/vhost_test/vms/1/muser
00:06:58.473   22:33:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@24 -- # mkdir -p /root/vhost_test/vms/1/muser/domain/muser1/1
00:06:58.474   22:33:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@26 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:0d:00.0
00:07:01.758  Nvme0n1
00:07:01.758   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -s SPDK001 -a
00:07:01.758   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@28 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Nvme0n1
00:07:02.018   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@29 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /root/vhost_test/vms/1/muser/domain/muser1/1 -s 0
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@31 -- # vm_setup --disk-type=vfio_user --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:07:02.278  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:07:02.278  INFO: Creating new VM in /root/vhost_test/vms/1
00:07:02.278  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:07:02.278  INFO: TASK MASK: 6-7
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:07:02.278  INFO: NUMA NODE: 0
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@758 -- # notice 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:07:02.278  INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@759 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl")
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@760 -- # [[ 1 == '' ]]
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:07:02.278  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:07:02.278   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@787 -- # cat
00:07:02.279    22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/1/muser/domain/muser1/1/cntrl
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@835 -- # echo 101
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@32 -- # vm_run 1
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@856 -- # false
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@859 -- # shift 0
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@373 -- # return 1
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:07:02.279  INFO: running /root/vhost_test/vms/1/run.sh
00:07:02.279   22:34:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:07:02.279  Running VM in /root/vhost_test/vms/1
00:07:02.847  Waiting for QEMU pid file
00:07:03.107  [2024-12-10 22:34:03.671779] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:07:03.675  === qemu.log ===
00:07:03.675  === qemu.log ===
00:07:03.675   22:34:04 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@33 -- # vm_wait_for_boot 60 1
00:07:03.675   22:34:04 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:07:03.675   22:34:04 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:07:03.675   22:34:04 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@281 -- # return 0
00:07:03.675   22:34:04 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:07:03.675   22:34:04 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:07:03.675  INFO: Waiting for VMs to boot
00:07:03.675  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:07:18.563  [2024-12-10 22:34:17.684198] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:07:18.563  [2024-12-10 22:34:17.693240] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:07:18.563  [2024-12-10 22:34:17.697273] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:07:26.682  
00:07:26.682  INFO: VM1 ready
00:07:26.682  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:07:26.682  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:07:27.620  INFO: all VMs ready
00:07:27.620   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@973 -- # return 0
00:07:27.620   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@35 -- # vm_exec 1 lsblk
00:07:27.620   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:07:27.620   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:27.620   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:07:27.620   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:07:27.620   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@339 -- # shift
00:07:27.620    22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:07:27.620    22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:07:27.620    22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:27.620    22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:07:27.620    22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:07:27.620    22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:07:27.620   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 lsblk
00:07:27.620  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:07:27.620  NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
00:07:27.620  sda       8:0    0     5G  0 disk 
00:07:27.620  ├─sda1    8:1    0     1M  0 part 
00:07:27.620  ├─sda2    8:2    0  1000M  0 part /boot
00:07:27.620  ├─sda3    8:3    0   100M  0 part /boot/efi
00:07:27.620  ├─sda4    8:4    0     4M  0 part 
00:07:27.620  └─sda5    8:5    0   3.9G  0 part /home
00:07:27.620                                    /
00:07:27.620  zram0   252:0    0   946M  0 disk [SWAP]
00:07:27.620  nvme0n1 259:1    0 931.5G  0 disk 
00:07:27.620   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@37 -- # vm_shutdown_all
00:07:27.620   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:07:27.620   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:07:27.620    22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:07:27.620    22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # vms=()
00:07:27.620    22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # local vms
00:07:27.620    22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:07:27.620    22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:07:27.620    22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:07:27.620   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:07:27.620   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:07:27.620   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:07:27.620   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:27.620   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:07:27.620   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:07:27.620   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:07:27.620   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:07:27.620   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:07:27.620   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:27.620   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:07:27.620   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:07:27.879    22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=66419
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 66419
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:07:27.879  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@432 -- # set +e
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@339 -- # shift
00:07:27.879    22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:07:27.879    22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:07:27.879    22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:27.879    22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:07:27.879    22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:07:27.879    22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:07:27.879  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:07:27.879  INFO: VM1 is shutting down - wait a while to complete
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@435 -- # set -e
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:07:27.879   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:07:28.138   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:07:28.138   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:07:28.138   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:07:28.138   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:28.138   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:07:28.138   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:07:28.138  INFO: Waiting for VMs to shutdown...
00:07:28.138   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:07:28.138   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:07:28.138   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:07:28.138   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:07:28.138   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:28.138   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:07:28.138   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:07:28.138   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:07:28.138   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:07:28.138    22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:07:28.138   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=66419
00:07:28.138   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 66419
00:07:28.138   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:07:28.138   22:34:28 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:07:29.076   22:34:29 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:07:29.076   22:34:29 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:07:29.076   22:34:29 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:07:29.076   22:34:29 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:07:29.076   22:34:29 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:29.076   22:34:29 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:07:29.076   22:34:29 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:07:29.076   22:34:29 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:07:29.076   22:34:29 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:07:29.076    22:34:29 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:07:29.076   22:34:29 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=66419
00:07:29.076   22:34:29 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 66419
00:07:29.076   22:34:29 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:07:29.076   22:34:29 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:07:29.076  [2024-12-10 22:34:29.838281] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:07:30.013   22:34:30 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:07:30.013   22:34:30 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:07:30.013   22:34:30 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:07:30.013   22:34:30 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:07:30.014   22:34:30 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:30.014   22:34:30 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:07:30.014   22:34:30 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:07:30.014   22:34:30 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:07:30.014   22:34:30 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@373 -- # return 1
00:07:30.014   22:34:30 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:07:30.014   22:34:30 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:07:30.952   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:07:30.952   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:07:30.952   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:07:30.952   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:07:30.952   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:07:30.952   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:07:30.952   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:07:30.952   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:30.952   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:07:30.952   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:07:30.952  INFO: All VMs successfully shut down
00:07:30.952   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@505 -- # return 0
00:07:30.952   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@40 -- # vm_setup --disk-type=vfio_user --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:07:30.952   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:07:30.952   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:07:30.952  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:07:30.952  INFO: Creating new VM in /root/vhost_test/vms/1
00:07:30.952  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:07:30.952  INFO: TASK MASK: 6-7
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:07:31.211  INFO: NUMA NODE: 0
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@758 -- # notice 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:07:31.211  INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@759 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl")
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@760 -- # [[ 1 == '' ]]
00:07:31.211   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:07:31.212  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@787 -- # cat
00:07:31.212    22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/1/muser/domain/muser1/1/cntrl
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@835 -- # echo 101
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@41 -- # vm_run 1
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@856 -- # false
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@859 -- # shift 0
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@373 -- # return 1
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:07:31.212  INFO: running /root/vhost_test/vms/1/run.sh
00:07:31.212   22:34:31 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:07:31.212  Running VM in /root/vhost_test/vms/1
00:07:31.471  Waiting for QEMU pid file
00:07:31.731  [2024-12-10 22:34:32.495678] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:07:32.669  === qemu.log ===
00:07:32.669  === qemu.log ===
00:07:32.669   22:34:33 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@42 -- # vm_wait_for_boot 60 1
00:07:32.669   22:34:33 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:07:32.669   22:34:33 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:07:32.669   22:34:33 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@281 -- # return 0
00:07:32.669   22:34:33 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:07:32.669   22:34:33 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:07:32.669  INFO: Waiting for VMs to boot
00:07:32.669  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:07:47.553  [2024-12-10 22:34:46.648387] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:07:47.553  [2024-12-10 22:34:46.657442] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:07:47.553  [2024-12-10 22:34:46.661467] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:07:54.122  
00:07:54.122  INFO: VM1 ready
00:07:54.122  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:07:54.122  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:07:54.690  INFO: all VMs ready
00:07:54.690   22:34:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@973 -- # return 0
00:07:54.690   22:34:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@44 -- # vm_exec 1 lsblk
00:07:54.690   22:34:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:07:54.690   22:34:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:54.690   22:34:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:07:54.690   22:34:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:07:54.690   22:34:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@339 -- # shift
00:07:54.690    22:34:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:07:54.690    22:34:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:07:54.690    22:34:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:54.690    22:34:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:07:54.690    22:34:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:07:54.690    22:34:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:07:54.690   22:34:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 lsblk
00:07:54.690  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:07:54.949  NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
00:07:54.949  sda       8:0    0     5G  0 disk 
00:07:54.949  ├─sda1    8:1    0     1M  0 part 
00:07:54.949  ├─sda2    8:2    0  1000M  0 part /boot
00:07:54.949  ├─sda3    8:3    0   100M  0 part /boot/efi
00:07:54.949  ├─sda4    8:4    0     4M  0 part 
00:07:54.949  └─sda5    8:5    0   3.9G  0 part /home
00:07:54.949                                    /
00:07:54.949  zram0   252:0    0   946M  0 disk [SWAP]
00:07:54.949  nvme0n1 259:1    0 931.5G  0 disk 
00:07:54.949   22:34:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@47 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_ns nqn.2019-07.io.spdk:cnode1 1
00:07:55.208   22:34:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_listener nqn.2019-07.io.spdk:cnode1 -t vfiouser -a /root/vhost_test/vms/1/muser/domain/muser1/1 -s 0
00:07:55.467   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@53 -- # vm_exec 1 'echo 1 > /sys/class/nvme/nvme0/device/remove'
00:07:55.467   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:07:55.467   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:55.467   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:07:55.467   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:07:55.467   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@339 -- # shift
00:07:55.467    22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:07:55.467    22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:07:55.467    22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:55.467    22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:07:55.467    22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:07:55.467    22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:07:55.467   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'echo 1 > /sys/class/nvme/nvme0/device/remove'
00:07:55.467  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@55 -- # vm_shutdown_all
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:07:55.728    22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:07:55.728    22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # vms=()
00:07:55.728    22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # local vms
00:07:55.728    22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:07:55.728    22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:07:55.728    22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:07:55.728    22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=71423
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 71423
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:07:55.728  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@432 -- # set +e
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@339 -- # shift
00:07:55.728    22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:07:55.728    22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:07:55.728    22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:55.728    22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:07:55.728    22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:07:55.728    22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:07:55.728   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:07:55.728  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:07:55.987  INFO: VM1 is shutting down - wait a while to complete
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@435 -- # set -e
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:07:55.987  INFO: Waiting for VMs to shutdown...
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:07:55.987    22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=71423
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 71423
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:07:55.987   22:34:56 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:07:56.923   22:34:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:07:56.923   22:34:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:07:56.923   22:34:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:07:56.923   22:34:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:07:56.923   22:34:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:56.923   22:34:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:07:56.923   22:34:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:07:56.923   22:34:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:07:56.923   22:34:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:07:56.923    22:34:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:07:56.923   22:34:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=71423
00:07:56.923   22:34:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 71423
00:07:56.923   22:34:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:07:56.923   22:34:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:07:58.300   22:34:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:07:58.300   22:34:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:07:58.300   22:34:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:07:58.300   22:34:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:07:58.300   22:34:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:58.300   22:34:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:07:58.300   22:34:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:07:58.300   22:34:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:07:58.300   22:34:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@373 -- # return 1
00:07:58.300   22:34:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:07:58.300   22:34:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:07:59.238   22:34:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:07:59.238   22:34:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:07:59.238   22:34:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:07:59.238   22:34:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:07:59.238   22:34:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:07:59.238   22:34:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:07:59.238   22:34:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:07:59.238   22:34:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:59.238   22:34:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:07:59.238   22:34:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:07:59.238  INFO: All VMs successfully shut down
00:07:59.238   22:34:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@505 -- # return 0
00:07:59.238   22:34:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@57 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_detach_controller Nvme0
00:08:00.614   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@58 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_delete_subsystem nqn.2019-07.io.spdk:cnode1
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@60 -- # vhosttestfini
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@1 -- # clean_vfio_user
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@6 -- # vm_kill_all
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@476 -- # local vm
00:08:00.873    22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@477 -- # vm_list_all
00:08:00.873    22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # vms=()
00:08:00.873    22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # local vms
00:08:00.873    22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:08:00.873    22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:08:00.873    22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@478 -- # vm_kill 1
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@442 -- # vm_num_is_valid 1
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/1
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@446 -- # return 0
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@7 -- # vhost_kill 0
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@202 -- # local rc=0
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@203 -- # local vhost_name=0
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@210 -- # local vhost_dir
00:08:00.873    22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@211 -- # get_vhost_dir 0
00:08:00.873    22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:08:00.873    22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:08:00.873    22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@220 -- # local vhost_pid
00:08:00.873    22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@221 -- # vhost_pid=65532
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@222 -- # notice 'killing vhost (PID 65532) app'
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 65532) app'
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 65532) app'
00:08:00.873  INFO: killing vhost (PID 65532) app
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@224 -- # kill -INT 65532
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:08:00.873  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@226 -- # (( i = 0 ))
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@227 -- # kill -0 65532
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@228 -- # echo .
00:08:00.873  .
00:08:00.873   22:35:01 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@229 -- # sleep 1
00:08:01.809   22:35:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@226 -- # (( i++ ))
00:08:01.809   22:35:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:08:01.809   22:35:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@227 -- # kill -0 65532
00:08:01.809  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (65532) - No such process
00:08:01.809   22:35:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@231 -- # break
00:08:01.809   22:35:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@234 -- # kill -0 65532
00:08:01.809  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (65532) - No such process
00:08:01.809   22:35:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@239 -- # kill -0 65532
00:08:01.809  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (65532) - No such process
00:08:01.809   22:35:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@245 -- # is_pid_child 65532
00:08:01.809   22:35:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1686 -- # local pid=65532 _pid
00:08:01.809    22:35:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1685 -- # jobs -pr
00:08:01.809   22:35:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1688 -- # read -r _pid
00:08:01.809   22:35:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1689 -- # (( pid == _pid ))
00:08:01.809   22:35:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1688 -- # read -r _pid
00:08:01.809   22:35:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1692 -- # return 1
00:08:01.809   22:35:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:08:01.809   22:35:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:01.809   22:35:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:08:01.809   22:35:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:08:01.809   22:35:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@261 -- # return 0
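The trace above shows vhost/common.sh's shutdown pattern: send SIGINT, then poll `kill -0` once per second for up to 60 iterations, and confirm with `is_pid_child` once the PID disappears. A minimal standalone sketch of that polling idiom (the `wait_for_exit` name and the short timeout are illustrative, not taken from vhost/common.sh):

```shell
#!/usr/bin/env bash
# Sketch of the SIGINT-then-poll shutdown seen in the trace. `kill -0`
# delivers no signal; it only tests whether the PID still exists.
wait_for_exit() {
    local pid=$1 timeout=${2:-60} i
    for ((i = 0; i < timeout; i++)); do
        kill -0 "$pid" 2>/dev/null || return 0   # process is gone
        echo -n .
        sleep 1
    done
    return 1   # still alive after $timeout seconds
}

sleep 1 &            # stand-in for the vhost app
pid=$!
wait_for_exit "$pid" 5 && echo "exited cleanly"
```

Note that `kill -0` failing (as in the "No such process" lines above) is the expected success path here: the loop breaks because the target already exited.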
00:08:01.809  
00:08:01.809  real	1m4.875s
00:08:01.809  user	4m14.280s
00:08:01.809  sys	0m1.926s
00:08:01.809   22:35:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:01.809   22:35:02 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:08:01.809  ************************************
00:08:01.809  END TEST vfio_user_nvme_restart_vm
00:08:01.809  ************************************
00:08:01.809   22:35:02 vfio_user_qemu -- vfio_user/vfio_user.sh@17 -- # run_test vfio_user_virtio_blk_restart_vm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh virtio_blk
00:08:01.809   22:35:02 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:01.809   22:35:02 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:01.809   22:35:02 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:08:02.068  ************************************
00:08:02.068  START TEST vfio_user_virtio_blk_restart_vm
00:08:02.068  ************************************
00:08:02.068   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh virtio_blk
00:08:02.068  * Looking for test storage...
00:08:02.068  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:08:02.068    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:02.068     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1711 -- # lcov --version
00:08:02.068     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:02.068    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:02.068    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:02.068    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:02.068    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:02.068    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@336 -- # IFS=.-:
00:08:02.068    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@336 -- # read -ra ver1
00:08:02.068    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@337 -- # IFS=.-:
00:08:02.068    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@337 -- # read -ra ver2
00:08:02.068    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@338 -- # local 'op=<'
00:08:02.068    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@340 -- # ver1_l=2
00:08:02.068    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@341 -- # ver2_l=1
00:08:02.068    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:02.068    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@344 -- # case "$op" in
00:08:02.068    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@345 -- # : 1
00:08:02.068    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:02.068    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:02.068     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@365 -- # decimal 1
00:08:02.068     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@353 -- # local d=1
00:08:02.068     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:02.068     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@355 -- # echo 1
00:08:02.069    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@365 -- # ver1[v]=1
00:08:02.069     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@366 -- # decimal 2
00:08:02.069     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@353 -- # local d=2
00:08:02.069     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:02.069     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@355 -- # echo 2
00:08:02.069    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@366 -- # ver2[v]=2
00:08:02.069    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:02.069    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:02.069    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@368 -- # return 0
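The `lt 1.15 2` trace above walks scripts/common.sh's cmp_versions: both version strings are split on `.`, `-`, and `:` into arrays, then compared component-wise, with missing components treated as 0. A hedged re-implementation of that idea (the `ver_lt` name is mine, and it assumes purely numeric components):

```shell
#!/usr/bin/env bash
# Component-wise "less than" for dotted versions, in the spirit of the
# cmp_versions trace above. Assumes numeric components (no leading zeros).
ver_lt() {
    local -a a b
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) i x y
    for ((i = 0; i < n; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"   # the same comparison as the lcov gate above
```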
00:08:02.069    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:02.069    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:02.069  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:02.069  		--rc genhtml_branch_coverage=1
00:08:02.069  		--rc genhtml_function_coverage=1
00:08:02.069  		--rc genhtml_legend=1
00:08:02.069  		--rc geninfo_all_blocks=1
00:08:02.069  		--rc geninfo_unexecuted_blocks=1
00:08:02.069  		
00:08:02.069  		'
00:08:02.069    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:02.069  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:02.069  		--rc genhtml_branch_coverage=1
00:08:02.069  		--rc genhtml_function_coverage=1
00:08:02.069  		--rc genhtml_legend=1
00:08:02.069  		--rc geninfo_all_blocks=1
00:08:02.069  		--rc geninfo_unexecuted_blocks=1
00:08:02.069  		
00:08:02.069  		'
00:08:02.069    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:02.069  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:02.069  		--rc genhtml_branch_coverage=1
00:08:02.069  		--rc genhtml_function_coverage=1
00:08:02.069  		--rc genhtml_legend=1
00:08:02.069  		--rc geninfo_all_blocks=1
00:08:02.069  		--rc geninfo_unexecuted_blocks=1
00:08:02.069  		
00:08:02.069  		'
00:08:02.069    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:02.069  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:02.069  		--rc genhtml_branch_coverage=1
00:08:02.069  		--rc genhtml_function_coverage=1
00:08:02.069  		--rc genhtml_legend=1
00:08:02.069  		--rc geninfo_all_blocks=1
00:08:02.069  		--rc geninfo_unexecuted_blocks=1
00:08:02.069  		
00:08:02.069  		'
00:08:02.069   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:08:02.069    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@6 -- # : 128
00:08:02.069    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@7 -- # : 512
00:08:02.069    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:08:02.069     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@6 -- # : false
00:08:02.069     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@7 -- # : /root/vhost_test
00:08:02.069     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:08:02.069     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@9 -- # : qemu-img
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:08:02.069     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:08:02.069     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:08:02.069     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:08:02.069     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:08:02.069     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:08:02.069     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:08:02.069       22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:08:02.069     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:08:02.069     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:08:02.069     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:08:02.069     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:08:02.069     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:08:02.069     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@2 -- # vhost_0_main_core=0
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:08:02.069     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:08:02.069      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:08:02.070      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:08:02.070      22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:08:02.070       22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:08:02.070        22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@244 -- # check_cgroup
00:08:02.070        22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:08:02.070        22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:08:02.070        22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@10 -- # echo 2
00:08:02.070       22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@244 -- # cgroup_version=2
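The check_cgroup trace above decides between cgroup v1 and v2 by testing for `/sys/fs/cgroup/cgroup.controllers` (present only at the root of the unified v2 hierarchy) and confirming `cpuset` appears in its contents. A sketch of that probe, with the mount root made a parameter for testability (that parameter is my addition, not in cgroups.sh):

```shell
#!/usr/bin/env bash
# Probe the cgroup hierarchy version, mirroring the check_cgroup trace:
# the unified (v2) hierarchy exposes cgroup.controllers at its root.
check_cgroup_version() {
    local root=${1:-/sys/fs/cgroup}
    if [[ -e $root/cgroup.controllers ]] &&
       [[ $(< "$root/cgroup.controllers") == *cpuset* ]]; then
        echo 2   # unified hierarchy with the cpuset controller available
    else
        echo 1   # legacy hierarchy (or cpuset not enabled)
    fi
}
```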
00:08:02.070    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:08:02.070    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:08:02.070    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/common.sh
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@12 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config
00:08:02.070    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]'
00:08:02.070    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0
00:08:02.070    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5
00:08:02.070    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:08:02.070    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7
00:08:02.070    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:08:02.070    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9
00:08:02.070    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@14 -- # bdfs=($(get_nvme_bdfs))
00:08:02.070    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@14 -- # get_nvme_bdfs
00:08:02.070    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1498 -- # bdfs=()
00:08:02.070    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1498 -- # local bdfs
00:08:02.070    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:08:02.070     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:08:02.070     22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:08:02.070    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:08:02.070    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0d:00.0
00:08:02.070    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@15 -- # get_vhost_dir 0
00:08:02.070    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:08:02.070    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:08:02.070    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@15 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@17 -- # virtio_type=virtio_blk
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@18 -- # [[ virtio_blk != virtio_blk ]]
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@31 -- # vhosttestinit
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@33 -- # vfu_tgt_run 0
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@6 -- # local vhost_name=0
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@7 -- # local vfio_user_dir vfu_pid_file rpc_py
00:08:02.070    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@9 -- # get_vhost_dir 0
00:08:02.070    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:08:02.070    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:08:02.070    22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@9 -- # vfio_user_dir=/root/vhost_test/vhost/0
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@10 -- # vfu_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@11 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@13 -- # mkdir -p /root/vhost_test/vhost/0
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@15 -- # timing_enter vfu_tgt_start
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@17 -- # vfupid=77147
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@18 -- # echo 77147
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@20 -- # echo 'Process pid: 77147'
00:08:02.070  Process pid: 77147
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@21 -- # echo 'waiting for app to run...'
00:08:02.070  waiting for app to run...
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@22 -- # waitforlisten 77147 /root/vhost_test/vhost/0/rpc.sock
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@835 -- # '[' -z 77147 ']'
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:08:02.070  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:02.070   22:35:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:08:02.329  [2024-12-10 22:35:02.904417] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:08:02.329  [2024-12-10 22:35:02.904521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77147 ]
00:08:02.329  EAL: No free 2048 kB hugepages reported on node 1
00:08:02.588  [2024-12-10 22:35:03.196683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:02.588  [2024-12-10 22:35:03.340996] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:08:02.588  [2024-12-10 22:35:03.341051] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:08:02.588  [2024-12-10 22:35:03.341095] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:08:02.588  [2024-12-10 22:35:03.341103] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:08:03.524   22:35:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:03.524   22:35:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@868 -- # return 0
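The waitforlisten call above blocks until the freshly spawned spdk_tgt (pid 77147) is up and listening on its RPC UNIX socket, retrying up to max_retries=100 times. A simplified sketch of that wait loop (the real helper also issues an RPC probe; here I only check that the PID is alive and the socket file exists):

```shell
#!/usr/bin/env bash
# Simplified waitforlisten: poll until $pid is still alive AND $sock appears
# as a UNIX socket, giving up after $retries attempts. The RPC round-trip
# the real helper performs is omitted here.
wait_for_socket() {
    local pid=$1 sock=$2 retries=${3:-100}
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
        [[ -S $sock ]] && return 0               # socket is up
        sleep 0.1
    done
    return 1   # timed out waiting for the listener
}
```

Checking liveness before the socket test matters: if the app crashes on startup, the loop fails fast instead of burning the full retry budget.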
00:08:03.524   22:35:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@24 -- # timing_exit vfu_tgt_start
00:08:03.524   22:35:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:03.524   22:35:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:08:03.524   22:35:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@35 -- # vfu_vm_dir=/root/vhost_test/vms/vfu_tgt
00:08:03.524   22:35:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@36 -- # rm -rf /root/vhost_test/vms/vfu_tgt
00:08:03.524   22:35:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@37 -- # mkdir -p /root/vhost_test/vms/vfu_tgt
00:08:03.525   22:35:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@39 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:0d:00.0
00:08:06.815  Nvme0n1
00:08:06.815   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@42 -- # disk_no=1
00:08:06.815   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@43 -- # vm_num=1
00:08:06.815   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@44 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_tgt_set_base_path /root/vhost_test/vms/vfu_tgt
00:08:07.073   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@46 -- # [[ virtio_blk == \v\i\r\t\i\o\_\b\l\k ]]
00:08:07.073   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@47 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_virtio_create_blk_endpoint virtio.1 --bdev-name Nvme0n1 --num-queues=2 --qsize=512 --packed-ring
00:08:07.073   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@53 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:08:07.073   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:08:07.073   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:08:07.073  INFO: Creating new VM in /root/vhost_test/vms/1
00:08:07.073  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:08:07.073  INFO: TASK MASK: 6-7
00:08:07.333   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:08:07.333   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:08:07.333   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:08:07.333   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:08:07.333   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:08:07.333   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:08:07.333   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:08:07.333   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:07.333   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:08:07.333   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:08:07.333  INFO: NUMA NODE: 0
00:08:07.333   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:08:07.333   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:08:07.333   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:08:07.333   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:08:07.333   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:08:07.333   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:08:07.333   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:08:07.333   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:08:07.333   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:08:07.333   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user_virtio
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@766 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:08:07.334  INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@767 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@768 -- # [[ 1 == '' ]]
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:08:07.334  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@787 -- # cat
00:08:07.334    22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1
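The run.sh saved above is just the argv assembled piece by piece in the trace (vhost/common.sh@674-767). A minimal sketch of that assembly, using the concrete values visible in this log (memory 1024M, 2 vCPUs, VNC :101, monitor port 10102, vfio-user socket virtio.1); variable names here are illustrative, not necessarily the script's own:

```shell
# Assumed values, taken from the printf line above
guest_memory=1024 cpu_num=2 vnc_socket=101 monitor_port=10102
VM_DIR=/root/vhost_test/vms disk=1

# Build the QEMU argv as a bash array, mirroring the cmd+=(...) steps in the trace
cmd=(qemu-system-x86_64 -m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num")
cmd+=(-vga std -vnc ":$vnc_socket" -daemonize)
# Hugepage-backed guest memory pinned to NUMA node 0
cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind")
cmd+=(-snapshot)                                      # discard guest disk writes on exit
cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
# Attach the SPDK vfu target's virtio-blk endpoint over vfio-user
cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")

printf '%s\n' "${cmd[@]}"
```

The array form matters: options like the `-object` string contain commas and `=` signs, and word-splitting a flat string would corrupt them.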
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@835 -- # echo 101
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@54 -- # vm_run 1
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@856 -- # false
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@859 -- # shift 0
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@373 -- # return 1
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:08:07.334  INFO: running /root/vhost_test/vms/1/run.sh
00:08:07.334   22:35:07 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:08:07.334  Running VM in /root/vhost_test/vms/1
00:08:07.594  [2024-12-10 22:35:08.245305] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully
00:08:07.594  Waiting for QEMU pid file
00:08:08.971  === qemu.log ===
00:08:08.971  === qemu.log ===
00:08:08.971   22:35:09 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@55 -- # vm_wait_for_boot 60 1
00:08:08.971   22:35:09 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:08:08.971   22:35:09 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:08:08.971   22:35:09 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@281 -- # return 0
00:08:08.971   22:35:09 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:08:08.971   22:35:09 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:08:08.971  INFO: Waiting for VMs to boot
00:08:08.971  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:08:47.715  
00:08:47.715  INFO: VM1 ready
00:08:47.715  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:47.715  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:47.715  INFO: all VMs ready
00:08:47.715   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@973 -- # return 0
00:08:47.715   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@58 -- # fio_bin=--fio-bin=/usr/src/fio-static/fio
00:08:47.715   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@59 -- # fio_disks=
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@60 -- # qemu_mask_param=VM_1_qemu_mask
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@62 -- # host_name=VM-1-6-7
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@63 -- # vm_exec 1 'hostname VM-1-6-7'
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:08:47.716    22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:47.716    22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:47.716    22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:47.716    22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:47.716    22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:47.716    22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'hostname VM-1-6-7'
00:08:47.716  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@64 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 1
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@977 -- # local OPTIND optchar
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@978 -- # local readonly=
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@979 -- # local fio_bin=
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@980 -- # getopts :-: optchar
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@981 -- # case "$optchar" in
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@983 -- # case "$OPTARG" in
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@980 -- # getopts :-: optchar
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@993 -- # shift 1
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@994 -- # for vm_num in "$@"
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@995 -- # notice 'Starting fio server on VM1'
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM1'
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM1'
00:08:47.716  INFO: Starting fio server on VM1
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@997 -- # vm_exec 1 'cat > /root/fio; chmod +x /root/fio'
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:08:47.716    22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:47.716    22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:47.716    22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:47.716    22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:47.716    22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:47.716    22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:08:47.716  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@998 -- # vm_exec 1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:08:47.716    22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:47.716    22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:47.716    22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:47.716    22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:47.716    22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:47.716    22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:08:47.716  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
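Every `vm_exec` call in this trace expands to the same sshpass invocation against the ssh port QEMU forwards to the guest (10100 here, read from the VM's ssh_socket file written earlier). A sketch of that wrapper, building the command without executing it; the port value is assumed from this log:

```shell
vm_ssh_port=10100   # assumed: the value stored in /root/vhost_test/vms/1/ssh_socket

# The fixed ssh options used throughout the trace: throwaway known_hosts,
# no host-key prompt, root login via sshpass
ssh_cmd=(sshpass -p root ssh -o UserKnownHostsFile=/dev/null \
         -o StrictHostKeyChecking=no -o User=root -p "$vm_ssh_port" 127.0.0.1)

# Print (rather than run) the full command that starts the fio server in the guest
echo "${ssh_cmd[*]} /root/fio --eta=never --server --daemonize=/root/fio.pid"
```

Pointing UserKnownHostsFile at /dev/null is why the "Permanently added" warning repeats on every single call: the host key is never actually persisted.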
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@66 -- # disks_before_restart=
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@67 -- # get_disks virtio_blk 1
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@24 -- # [[ virtio_blk == \v\i\r\t\i\o\_\s\c\s\i ]]
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@26 -- # [[ virtio_blk == \v\i\r\t\i\o\_\b\l\k ]]
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@27 -- # vm_check_blk_location 1
00:08:47.716   22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1035 -- # local 'script=shopt -s nullglob; cd /sys/block; echo vd*'
00:08:47.716    22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # echo 'shopt -s nullglob; cd /sys/block; echo vd*'
00:08:47.716    22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # vm_exec 1 bash -s
00:08:47.716    22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:47.716    22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:47.716    22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:47.716    22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:08:47.716    22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:08:47.716     22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:47.716     22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:47.716     22:35:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:47.716     22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:47.716     22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:47.716     22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:47.716    22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 bash -s
00:08:47.716  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:47.716   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # SCSI_DISK=vda
00:08:47.716   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1038 -- # [[ -z vda ]]
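The `vm_check_blk_location` step above finds virtio-blk devices by globbing `vd*` in the guest's /sys/block with nullglob set, so an empty match yields an empty string rather than the literal `vd*`. The same logic can be exercised against a scratch directory standing in for /sys/block (the temp-dir setup is illustrative; the real script runs the glob inside the guest over ssh):

```shell
# Simulate a guest /sys/block: one virtio-blk disk plus the IDE OS disk
tmp=$(mktemp -d)
mkdir "$tmp/vda" "$tmp/sda"

# nullglob makes 'vd*' expand to nothing (not the literal pattern) when no match
blk=$(shopt -s nullglob; cd "$tmp" && echo vd*)
echo "$blk"

rm -rf "$tmp"
```

Only `vda` survives the glob, which is exactly the `SCSI_DISK=vda` result recorded in the trace.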
00:08:47.716   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@68 -- # disks_before_restart=vda
00:08:47.716    22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@70 -- # printf :/dev/%s vda
00:08:47.716   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@70 -- # fio_disks=' --vm=1:/dev/vda'
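The `fio_disks` string above is assembled from the detected disk list with `printf ':/dev/%s'`, which prepends `:/dev/` to each name and concatenates; the result is then glued onto ` --vm=<num>`. A sketch with this run's single disk (the `SCSI_DISK` seed value is assumed from the trace):

```shell
SCSI_DISK="vda"   # assumed: result of vm_check_blk_location in this run

# One ':/dev/<name>' segment per disk; with several disks this would
# yield e.g. ':/dev/vda:/dev/vdb', fio's multi-file separator syntax
suffix=$(printf ':/dev/%s' $SCSI_DISK)

fio_disks+=" --vm=1$suffix"
echo "$fio_disks"
```

With two disks the same line would produce ` --vm=1:/dev/vda:/dev/vdb`, which is why `$SCSI_DISK` is deliberately left unquoted here: word splitting feeds each disk to printf as a separate argument.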
00:08:47.716   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@71 -- # job_file=default_integrity.job
00:08:47.716   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@74 -- # run_fio --fio-bin=/usr/src/fio-static/fio --job-file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job --out=/root/vhost_test/fio_results --vm=1:/dev/vda
00:08:47.716   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1053 -- # local arg
00:08:47.716   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1054 -- # local job_file=
00:08:47.716   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1055 -- # local fio_bin=
00:08:47.716   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1056 -- # vms=()
00:08:47.716   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1056 -- # local vms
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1057 -- # local out=
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1058 -- # local vm
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1059 -- # local run_server_mode=true
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1060 -- # local run_plugin_mode=false
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1061 -- # local fio_start_cmd
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1062 -- # local fio_output_format=normal
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1063 -- # local fio_gtod_reduce=false
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1064 -- # local wait_for_fio=true
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1069 -- # local fio_bin=/usr/src/fio-static/fio
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1068 -- # local job_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1072 -- # local out=/root/vhost_test/fio_results
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1073 -- # mkdir -p /root/vhost_test/fio_results
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1092 -- # [[ -n /usr/src/fio-static/fio ]]
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1092 -- # [[ ! -r /usr/src/fio-static/fio ]]
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1097 -- # [[ -z /usr/src/fio-static/fio ]]
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1101 -- # [[ ! -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job ]]
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1106 -- # fio_start_cmd='/usr/src/fio-static/fio --eta=never '
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1108 -- # local job_fname
00:08:47.717    22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1109 -- # basename /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1109 -- # job_fname=default_integrity.job
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1110 -- # log_fname=default_integrity.log
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1111 -- # fio_start_cmd+=' --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal '
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1115 -- # local vm_num=1
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1116 -- # local vmdisks=/dev/vda
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/dev/vda@;s@description=\(.*\)@description=\1 (VM=1)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1119 -- # vm_exec 1 'cat > /root/default_integrity.job'
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:08:47.717    22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:47.717    22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:47.717    22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:47.717    22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:47.717    22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:47.717    22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/default_integrity.job'
00:08:47.717  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1121 -- # false
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1125 -- # vm_exec 1 cat /root/default_integrity.job
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:08:47.717    22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:47.717    22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:47.717    22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:47.717    22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:47.717    22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:47.717    22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 cat /root/default_integrity.job
00:08:47.717  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:47.717  [global]
00:08:47.717  blocksize_range=4k-512k
00:08:47.717  iodepth=512
00:08:47.717  iodepth_batch=128
00:08:47.717  iodepth_low=256
00:08:47.717  ioengine=libaio
00:08:47.717  size=1G
00:08:47.717  io_size=4G
00:08:47.717  filename=/dev/vda
00:08:47.717  group_reporting
00:08:47.717  thread
00:08:47.717  numjobs=1
00:08:47.717  direct=1
00:08:47.717  rw=randwrite
00:08:47.717  do_verify=1
00:08:47.717  verify=md5
00:08:47.717  verify_backlog=1024
00:08:47.717  fsync_on_close=1
00:08:47.717  verify_state_save=0
00:08:47.717  [nvme-host]
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1127 -- # true
00:08:47.717    22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1128 -- # vm_fio_socket 1
00:08:47.717    22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@326 -- # vm_num_is_valid 1
00:08:47.717    22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:47.717    22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:47.717    22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/1
00:08:47.717    22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/1/fio_socket
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10101 --remote-config /root/default_integrity.job '
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1131 -- # true
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1147 -- # true
00:08:47.717   22:35:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1161 -- # /usr/src/fio-static/fio --eta=never --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal --client=127.0.0.1,10101 --remote-config /root/default_integrity.job
00:08:57.700   22:35:57 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1162 -- # sleep 1
00:08:57.700   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1164 -- # [[ normal == \j\s\o\n ]]
00:08:57.700   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1172 -- # [[ ! -n '' ]]
00:08:57.700   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1173 -- # cat /root/vhost_test/fio_results/default_integrity.log
00:08:57.700  hostname=VM-1-6-7, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:08:57.700  <VM-1-6-7> nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:08:57.700  <VM-1-6-7> Starting 1 thread
00:08:57.700  <VM-1-6-7> 
00:08:57.700  nvme-host: (groupid=0, jobs=1): err= 0: pid=945: Tue Dec 10 22:35:57 2024
00:08:57.700    read: IOPS=1339, BW=225MiB/s (236MB/s)(2048MiB/9115msec)
00:08:57.700      slat (usec): min=43, max=18459, avg=2468.74, stdev=3809.26
00:08:57.700      clat (msec): min=7, max=332, avg=130.45, stdev=70.01
00:08:57.700       lat (msec): min=7, max=333, avg=132.92, stdev=69.43
00:08:57.700      clat percentiles (msec):
00:08:57.700       |  1.00th=[   13],  5.00th=[   20], 10.00th=[   44], 20.00th=[   74],
00:08:57.700       | 30.00th=[   87], 40.00th=[  106], 50.00th=[  124], 60.00th=[  142],
00:08:57.700       | 70.00th=[  163], 80.00th=[  190], 90.00th=[  230], 95.00th=[  262],
00:08:57.700       | 99.00th=[  309], 99.50th=[  317], 99.90th=[  330], 99.95th=[  330],
00:08:57.700       | 99.99th=[  334]
00:08:57.700    write: IOPS=1425, BW=239MiB/s (251MB/s)(2048MiB/8563msec); 0 zone resets
00:08:57.700      slat (usec): min=254, max=94387, avg=21407.22, stdev=15412.69
00:08:57.700      clat (msec): min=7, max=288, avg=119.28, stdev=65.47
00:08:57.700       lat (msec): min=7, max=346, avg=140.68, stdev=69.39
00:08:57.700      clat percentiles (msec):
00:08:57.700       |  1.00th=[    9],  5.00th=[   20], 10.00th=[   29], 20.00th=[   65],
00:08:57.700       | 30.00th=[   82], 40.00th=[   96], 50.00th=[  110], 60.00th=[  132],
00:08:57.700       | 70.00th=[  148], 80.00th=[  176], 90.00th=[  209], 95.00th=[  236],
00:08:57.700       | 99.00th=[  271], 99.50th=[  288], 99.90th=[  288], 99.95th=[  288],
00:08:57.700       | 99.99th=[  288]
00:08:57.700     bw (  KiB/s): min=90610, max=364920, per=95.14%, avg=233006.78, stdev=87135.08, samples=18
00:08:57.700     iops        : min=  510, max= 2048, avg=1356.33, stdev=598.42, samples=18
00:08:57.700    lat (msec)   : 10=0.66%, 20=4.67%, 50=8.13%, 100=26.13%, 250=54.91%
00:08:57.700    lat (msec)   : 500=5.50%
00:08:57.700    cpu          : usr=94.01%, sys=1.99%, ctx=345, majf=0, minf=34
00:08:57.700    IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.5%, >=64=99.1%
00:08:57.700       submit    : 0=0.0%, 4=0.0%, 8=1.2%, 16=0.0%, 32=0.0%, 64=19.2%, >=64=79.6%
00:08:57.700       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:57.700       issued rwts: total=12208,12208,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:57.700       latency   : target=0, window=0, percentile=100.00%, depth=512
00:08:57.700  
00:08:57.700  Run status group 0 (all jobs):
00:08:57.700     READ: bw=225MiB/s (236MB/s), 225MiB/s-225MiB/s (236MB/s-236MB/s), io=2048MiB (2147MB), run=9115-9115msec
00:08:57.700    WRITE: bw=239MiB/s (251MB/s), 239MiB/s-239MiB/s (251MB/s-251MB/s), io=2048MiB (2147MB), run=8563-8563msec
00:08:57.700  
00:08:57.700  Disk stats (read/write):
00:08:57.700    vda: ios=11900/12141, merge=51/72, ticks=147037/104926, in_queue=251964, util=29.74%
00:08:57.700   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@77 -- # notice 'Shutting down virtual machine...'
00:08:57.700   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...'
00:08:57.700   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:08:57.700   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:08:57.700   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:08:57.700   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:57.700   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:08:57.700   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...'
00:08:57.700  INFO: Shutting down virtual machine...
00:08:57.700   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@78 -- # vm_shutdown_all
00:08:57.700   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:08:57.701    22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:08:57.701    22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@466 -- # vms=()
00:08:57.701    22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@466 -- # local vms
00:08:57.701    22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:08:57.701    22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:08:57.701    22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:08:57.701    22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=78028
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 78028
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:08:57.701  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@432 -- # set +e
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:08:57.701    22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:57.701    22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:57.701    22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:57.701    22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:57.701    22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:57.701    22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:57.701   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:08:57.701  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:08:57.960  INFO: VM1 is shutting down - wait a while to complete
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@435 -- # set -e
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:08:57.960  INFO: Waiting for VMs to shutdown...
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:08:57.960    22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=78028
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 78028
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:08:57.960   22:35:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:08:58.908   22:35:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:08:58.909   22:35:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:08:58.909   22:35:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:08:58.909   22:35:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:08:58.909   22:35:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:58.909   22:35:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:58.909   22:35:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:08:58.909   22:35:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:08:58.909   22:35:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:08:58.909    22:35:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:08:58.909   22:35:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=78028
00:08:58.909   22:35:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 78028
00:08:58.909   22:35:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:08:58.909   22:35:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:08:59.846   22:36:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:08:59.846   22:36:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:08:59.846   22:36:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:08:59.846   22:36:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:08:59.846   22:36:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:59.846   22:36:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:59.846   22:36:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:08:59.846   22:36:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:08:59.846   22:36:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@373 -- # return 1
00:08:59.846   22:36:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:08:59.846   22:36:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:09:00.782   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:09:00.782   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:09:00.782   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:09:00.782   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:09:00.782   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:00.782   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:09:00.782   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:00.782   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:00.782   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:09:00.782   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:09:00.782  INFO: All VMs successfully shut down
00:09:00.782   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@505 -- # return 0
00:09:00.782   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@81 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:09:01.040   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:09:01.040   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:01.040  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:09:01.040  INFO: Creating new VM in /root/vhost_test/vms/1
00:09:01.040  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:09:01.040  INFO: TASK MASK: 6-7
00:09:01.040   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:09:01.040   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:09:01.040   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:09:01.040   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:09:01.040   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:01.040   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:09:01.040   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:01.040   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:01.040   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:09:01.040   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:09:01.040  INFO: NUMA NODE: 0
00:09:01.040   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:09:01.040   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user_virtio
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@766 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:09:01.041  INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@767 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@768 -- # [[ 1 == '' ]]
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:09:01.041  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@787 -- # cat
00:09:01.041    22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@835 -- # echo 101
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@82 -- # vm_run 1
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@856 -- # false
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@859 -- # shift 0
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@373 -- # return 1
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:09:01.041  INFO: running /root/vhost_test/vms/1/run.sh
00:09:01.041   22:36:01 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:09:01.041  Running VM in /root/vhost_test/vms/1
00:09:01.301  [2024-12-10 22:36:02.010383] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully
00:09:01.560  Waiting for QEMU pid file
00:09:02.498  === qemu.log ===
00:09:02.498  === qemu.log ===
00:09:02.498   22:36:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@83 -- # vm_wait_for_boot 60 1
00:09:02.498   22:36:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:09:02.498   22:36:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:09:02.498   22:36:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@281 -- # return 0
00:09:02.498   22:36:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:09:02.498   22:36:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:02.498  INFO: Waiting for VMs to boot
00:09:02.498  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:09:41.236  
00:09:41.236  INFO: VM1 ready
00:09:41.236  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:41.236  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:41.236  INFO: all VMs ready
00:09:41.236   22:36:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@973 -- # return 0
00:09:41.236   22:36:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@86 -- # disks_after_restart=
00:09:41.236   22:36:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@87 -- # get_disks virtio_blk 1
00:09:41.236   22:36:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@24 -- # [[ virtio_blk == \v\i\r\t\i\o\_\s\c\s\i ]]
00:09:41.236   22:36:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@26 -- # [[ virtio_blk == \v\i\r\t\i\o\_\b\l\k ]]
00:09:41.236   22:36:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@27 -- # vm_check_blk_location 1
00:09:41.236   22:36:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1035 -- # local 'script=shopt -s nullglob; cd /sys/block; echo vd*'
00:09:41.236    22:36:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # echo 'shopt -s nullglob; cd /sys/block; echo vd*'
00:09:41.236    22:36:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # vm_exec 1 bash -s
00:09:41.236    22:36:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:09:41.236    22:36:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:41.236    22:36:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:41.236    22:36:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:09:41.236    22:36:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:09:41.236     22:36:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:09:41.236     22:36:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:09:41.236     22:36:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:41.236     22:36:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:41.236     22:36:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:09:41.236     22:36:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:09:41.236    22:36:39 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 bash -s
00:09:41.236  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:41.236   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # SCSI_DISK=vda
00:09:41.236   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1038 -- # [[ -z vda ]]
00:09:41.236   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@88 -- # disks_after_restart=vda
00:09:41.236   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@90 -- # [[ vda != \v\d\a ]]
00:09:41.236   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@96 -- # notice 'Shutting down virtual machine...'
00:09:41.236   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...'
00:09:41.236   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:41.236   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:09:41.236   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:41.236   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:41.236   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:09:41.236   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...'
00:09:41.236  INFO: Shutting down virtual machine...
00:09:41.236   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@97 -- # vm_shutdown_all
00:09:41.236   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:09:41.236   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:09:41.236    22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:09:41.236    22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@466 -- # vms=()
00:09:41.236    22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@466 -- # local vms
00:09:41.236    22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:09:41.236    22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:09:41.236    22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:09:41.236   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:09:41.236   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:09:41.236   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:09:41.236   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:41.236   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:09:41.237    22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=87773
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 87773
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:09:41.237  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@432 -- # set +e
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:09:41.237    22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:09:41.237    22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:09:41.237    22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:41.237    22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:41.237    22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:09:41.237    22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:09:41.237  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:09:41.237  INFO: VM1 is shutting down - wait a while to complete
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@435 -- # set -e
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:09:41.237  INFO: Waiting for VMs to shutdown...
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:09:41.237    22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=87773
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 87773
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:09:41.237   22:36:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:09:41.237   22:36:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:09:41.237   22:36:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:09:41.237   22:36:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:09:41.237   22:36:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:41.237   22:36:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:41.237   22:36:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:41.237   22:36:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:41.237   22:36:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:41.237   22:36:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:09:41.237    22:36:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:09:41.237   22:36:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=87773
00:09:41.237   22:36:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 87773
00:09:41.237   22:36:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:09:41.237   22:36:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:09:41.805   22:36:42 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:09:41.805   22:36:42 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:09:41.805   22:36:42 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:09:41.805   22:36:42 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:41.805   22:36:42 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:41.805   22:36:42 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:41.805   22:36:42 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:41.805   22:36:42 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:41.805   22:36:42 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@373 -- # return 1
00:09:41.805   22:36:42 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:09:41.805   22:36:42 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:09:42.739   22:36:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:09:42.739   22:36:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:09:42.739   22:36:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:09:42.739   22:36:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:09:42.739   22:36:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:42.739   22:36:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:09:42.739   22:36:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:42.739   22:36:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:42.739   22:36:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:09:42.739   22:36:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:09:42.739  INFO: All VMs successfully shut down
00:09:42.739   22:36:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@505 -- # return 0
00:09:42.739   22:36:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@99 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_detach_controller Nvme0
00:09:42.998  [2024-12-10 22:36:43.602819] vfu_virtio_blk.c: 384:bdev_event_cb: *NOTICE*: bdev name (Nvme0n1) received event(SPDK_BDEV_EVENT_REMOVE)
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@101 -- # vhost_kill 0
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@202 -- # local rc=0
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@203 -- # local vhost_name=0
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@210 -- # local vhost_dir
00:09:44.373    22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@211 -- # get_vhost_dir 0
00:09:44.373    22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:09:44.373    22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:09:44.373    22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@220 -- # local vhost_pid
00:09:44.373    22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@221 -- # vhost_pid=77147
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@222 -- # notice 'killing vhost (PID 77147) app'
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 77147) app'
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 77147) app'
00:09:44.373  INFO: killing vhost (PID 77147) app
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@224 -- # kill -INT 77147
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:09:44.373  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i = 0 ))
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@227 -- # kill -0 77147
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@228 -- # echo .
00:09:44.373  .
00:09:44.373   22:36:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@229 -- # sleep 1
00:09:45.309   22:36:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i++ ))
00:09:45.309   22:36:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:09:45.309   22:36:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@227 -- # kill -0 77147
00:09:45.309   22:36:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@228 -- # echo .
00:09:45.309  .
00:09:45.309   22:36:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@229 -- # sleep 1
00:09:46.687   22:36:47 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i++ ))
00:09:46.687   22:36:47 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:09:46.687   22:36:47 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@227 -- # kill -0 77147
00:09:46.687   22:36:47 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@228 -- # echo .
00:09:46.687  .
00:09:46.687   22:36:47 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@229 -- # sleep 1
00:09:47.646   22:36:48 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i++ ))
00:09:47.646   22:36:48 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:09:47.646   22:36:48 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@227 -- # kill -0 77147
00:09:47.646  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (77147) - No such process
00:09:47.646   22:36:48 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@231 -- # break
00:09:47.646   22:36:48 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@234 -- # kill -0 77147
00:09:47.646  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (77147) - No such process
00:09:47.646   22:36:48 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@239 -- # kill -0 77147
00:09:47.646  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (77147) - No such process
00:09:47.646   22:36:48 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@245 -- # is_pid_child 77147
00:09:47.646   22:36:48 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1686 -- # local pid=77147 _pid
00:09:47.646    22:36:48 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1685 -- # jobs -pr
00:09:47.646   22:36:48 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1688 -- # read -r _pid
00:09:47.646   22:36:48 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1689 -- # (( pid == _pid ))
00:09:47.646   22:36:48 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1688 -- # read -r _pid
00:09:47.646   22:36:48 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1692 -- # return 1
00:09:47.646   22:36:48 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:09:47.646   22:36:48 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:47.646   22:36:48 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:47.646   22:36:48 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:09:47.646   22:36:48 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@261 -- # return 0
00:09:47.646   22:36:48 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@103 -- # vhosttestfini
00:09:47.646   22:36:48 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:09:47.646  
00:09:47.646  real	1m45.496s
00:09:47.646  user	6m53.084s
00:09:47.646  sys	0m2.144s
00:09:47.646   22:36:48 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:47.646   22:36:48 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:47.646  ************************************
00:09:47.646  END TEST vfio_user_virtio_blk_restart_vm
00:09:47.646  ************************************
00:09:47.646   22:36:48 vfio_user_qemu -- vfio_user/vfio_user.sh@18 -- # run_test vfio_user_virtio_scsi_restart_vm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh virtio_scsi
00:09:47.646   22:36:48 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:47.646   22:36:48 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:47.646   22:36:48 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:09:47.646  ************************************
00:09:47.646  START TEST vfio_user_virtio_scsi_restart_vm
00:09:47.646  ************************************
00:09:47.646   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh virtio_scsi
00:09:47.646  * Looking for test storage...
00:09:47.646  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:09:47.646    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:47.646     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1711 -- # lcov --version
00:09:47.646     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:47.646    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:47.646    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:47.646    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:47.646    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:47.646    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@336 -- # IFS=.-:
00:09:47.646    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@336 -- # read -ra ver1
00:09:47.646    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@337 -- # IFS=.-:
00:09:47.646    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@337 -- # read -ra ver2
00:09:47.646    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@338 -- # local 'op=<'
00:09:47.646    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@340 -- # ver1_l=2
00:09:47.646    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@341 -- # ver2_l=1
00:09:47.646    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:47.646    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@344 -- # case "$op" in
00:09:47.646    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@345 -- # : 1
00:09:47.646    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:47.646    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:47.646     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@365 -- # decimal 1
00:09:47.646     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@353 -- # local d=1
00:09:47.646     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:47.646     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@355 -- # echo 1
00:09:47.646    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@365 -- # ver1[v]=1
00:09:47.646     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@366 -- # decimal 2
00:09:47.646     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@353 -- # local d=2
00:09:47.646     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:47.646     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@355 -- # echo 2
00:09:47.646    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@366 -- # ver2[v]=2
00:09:47.646    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:47.646    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:47.647    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@368 -- # return 0
00:09:47.647    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:47.647    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:47.647  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:47.647  		--rc genhtml_branch_coverage=1
00:09:47.647  		--rc genhtml_function_coverage=1
00:09:47.647  		--rc genhtml_legend=1
00:09:47.647  		--rc geninfo_all_blocks=1
00:09:47.647  		--rc geninfo_unexecuted_blocks=1
00:09:47.647  		
00:09:47.647  		'
00:09:47.647    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:47.647  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:47.647  		--rc genhtml_branch_coverage=1
00:09:47.647  		--rc genhtml_function_coverage=1
00:09:47.647  		--rc genhtml_legend=1
00:09:47.647  		--rc geninfo_all_blocks=1
00:09:47.647  		--rc geninfo_unexecuted_blocks=1
00:09:47.647  		
00:09:47.647  		'
00:09:47.647    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:47.647  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:47.647  		--rc genhtml_branch_coverage=1
00:09:47.647  		--rc genhtml_function_coverage=1
00:09:47.647  		--rc genhtml_legend=1
00:09:47.647  		--rc geninfo_all_blocks=1
00:09:47.647  		--rc geninfo_unexecuted_blocks=1
00:09:47.647  		
00:09:47.647  		'
00:09:47.647    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:47.647  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:47.647  		--rc genhtml_branch_coverage=1
00:09:47.647  		--rc genhtml_function_coverage=1
00:09:47.647  		--rc genhtml_legend=1
00:09:47.647  		--rc geninfo_all_blocks=1
00:09:47.647  		--rc geninfo_unexecuted_blocks=1
00:09:47.647  		
00:09:47.647  		'
00:09:47.647   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:09:47.647    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@6 -- # : 128
00:09:47.647    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@7 -- # : 512
00:09:47.647    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:09:47.647     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@6 -- # : false
00:09:47.647     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@7 -- # : /root/vhost_test
00:09:47.647     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:09:47.647     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@9 -- # : qemu-img
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:09:47.647     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:09:47.647     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:09:47.647     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:09:47.647     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:09:47.647     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:09:47.647     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:09:47.647       22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:09:47.647     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:09:47.647     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:09:47.647     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:09:47.647     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:09:47.647     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:09:47.647     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@2 -- # vhost_0_main_core=0
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:09:47.647     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:09:47.647      22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:09:47.647       22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:09:47.647        22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@244 -- # check_cgroup
00:09:47.647        22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:09:47.647        22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:09:47.647        22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@10 -- # echo 2
00:09:47.647       22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:09:47.647    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:09:47.647    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:09:47.647    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:09:47.647   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/common.sh
00:09:47.647   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@12 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config
00:09:47.647    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]'
00:09:47.647    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0
00:09:47.647    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5
00:09:47.647    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:09:47.647    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7
00:09:47.647    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:09:47.647    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9
00:09:47.647    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:09:47.647   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@14 -- # bdfs=($(get_nvme_bdfs))
00:09:47.647    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@14 -- # get_nvme_bdfs
00:09:47.648    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1498 -- # bdfs=()
00:09:47.648    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1498 -- # local bdfs
00:09:47.648    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:09:47.648     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:09:47.648     22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:09:47.648    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:09:47.648    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0d:00.0
00:09:47.648    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@15 -- # get_vhost_dir 0
00:09:47.648    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:09:47.648    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:09:47.648    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@15 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@17 -- # virtio_type=virtio_scsi
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@18 -- # [[ virtio_scsi != virtio_blk ]]
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@18 -- # [[ virtio_scsi != virtio_scsi ]]
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@31 -- # vhosttestinit
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@33 -- # vfu_tgt_run 0
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@6 -- # local vhost_name=0
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@7 -- # local vfio_user_dir vfu_pid_file rpc_py
00:09:47.648    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@9 -- # get_vhost_dir 0
00:09:47.648    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:09:47.648    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:09:47.648    22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@9 -- # vfio_user_dir=/root/vhost_test/vhost/0
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@10 -- # vfu_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@11 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@13 -- # mkdir -p /root/vhost_test/vhost/0
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@15 -- # timing_enter vfu_tgt_start
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@17 -- # vfupid=96308
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@18 -- # echo 96308
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@20 -- # echo 'Process pid: 96308'
00:09:47.648  Process pid: 96308
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@21 -- # echo 'waiting for app to run...'
00:09:47.648  waiting for app to run...
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@22 -- # waitforlisten 96308 /root/vhost_test/vhost/0/rpc.sock
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@835 -- # '[' -z 96308 ']'
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:09:47.648  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:47.648   22:36:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:47.907  [2024-12-10 22:36:48.475550] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:09:47.907  [2024-12-10 22:36:48.475657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96308 ]
00:09:47.907  EAL: No free 2048 kB hugepages reported on node 1
00:09:48.167  [2024-12-10 22:36:48.802001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:48.167  [2024-12-10 22:36:48.942727] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:09:48.167  [2024-12-10 22:36:48.942796] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:09:48.167  [2024-12-10 22:36:48.942821] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:09:48.167  [2024-12-10 22:36:48.942828] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:09:49.102   22:36:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:49.102   22:36:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@868 -- # return 0
00:09:49.102   22:36:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@24 -- # timing_exit vfu_tgt_start
00:09:49.102   22:36:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:49.102   22:36:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:49.102   22:36:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@35 -- # vfu_vm_dir=/root/vhost_test/vms/vfu_tgt
00:09:49.102   22:36:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@36 -- # rm -rf /root/vhost_test/vms/vfu_tgt
00:09:49.102   22:36:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@37 -- # mkdir -p /root/vhost_test/vms/vfu_tgt
00:09:49.102   22:36:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@39 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:0d:00.0
00:09:52.393  Nvme0n1
00:09:52.393   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@42 -- # disk_no=1
00:09:52.393   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@43 -- # vm_num=1
00:09:52.393   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@44 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_tgt_set_base_path /root/vhost_test/vms/vfu_tgt
00:09:52.651   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@46 -- # [[ virtio_scsi == \v\i\r\t\i\o\_\b\l\k ]]
00:09:52.651   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@48 -- # [[ virtio_scsi == \v\i\r\t\i\o\_\s\c\s\i ]]
00:09:52.652   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_virtio_create_scsi_endpoint virtio.1 --num-io-queues=2 --qsize=512 --packed-ring
00:09:52.910   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@50 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_virtio_scsi_add_target virtio.1 --scsi-target-num=0 --bdev-name Nvme0n1
00:09:52.910  [2024-12-10 22:36:53.658978] vfu_virtio_scsi.c: 886:vfu_virtio_scsi_add_target: *NOTICE*: virtio.1: added SCSI target 0 using bdev 'Nvme0n1'
00:09:52.910   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@53 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:09:52.910   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:09:52.910   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:52.910  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:09:52.910  INFO: Creating new VM in /root/vhost_test/vms/1
00:09:52.910  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:09:52.910  INFO: TASK MASK: 6-7
00:09:53.169   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:09:53.169   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:09:53.169   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:09:53.169   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:09:53.169   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:53.169   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:09:53.169   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:53.169   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:53.169   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:09:53.169   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:09:53.169  INFO: NUMA NODE: 0
00:09:53.169   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:09:53.169   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:09:53.169   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:09:53.169   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:09:53.169   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:09:53.169   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user_virtio
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@766 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:09:53.170  INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@767 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@768 -- # [[ 1 == '' ]]
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:09:53.170  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@787 -- # cat
00:09:53.170    22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@835 -- # echo 101
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@54 -- # vm_run 1
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@856 -- # false
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@859 -- # shift 0
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@373 -- # return 1
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:09:53.170  INFO: running /root/vhost_test/vms/1/run.sh
00:09:53.170   22:36:53 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:09:53.170  Running VM in /root/vhost_test/vms/1
00:09:53.429  [2024-12-10 22:36:54.139368] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully
00:09:53.688  Waiting for QEMU pid file
00:09:54.623  === qemu.log ===
00:09:54.623  === qemu.log ===
00:09:54.623   22:36:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@55 -- # vm_wait_for_boot 60 1
00:09:54.623   22:36:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:09:54.623   22:36:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:09:54.623   22:36:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@281 -- # return 0
00:09:54.623   22:36:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:09:54.623   22:36:55 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:54.623  INFO: Waiting for VMs to boot
00:09:54.623  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:10:09.505  [2024-12-10 22:37:09.144645] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:31.440  
00:10:31.440  INFO: VM1 ready
00:10:31.440  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:31.440  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:31.440  INFO: all VMs ready
00:10:31.440   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@973 -- # return 0
00:10:31.440   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@58 -- # fio_bin=--fio-bin=/usr/src/fio-static/fio
00:10:31.440   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@59 -- # fio_disks=
00:10:31.440   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@60 -- # qemu_mask_param=VM_1_qemu_mask
00:10:31.440   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@62 -- # host_name=VM-1-6-7
00:10:31.440   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@63 -- # vm_exec 1 'hostname VM-1-6-7'
00:10:31.440   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:31.440   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:31.440   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:31.440   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:31.440   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:10:31.440    22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:31.440    22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:31.440    22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:31.440    22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:31.440    22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:31.441    22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'hostname VM-1-6-7'
00:10:31.441  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@64 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 1
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@977 -- # local OPTIND optchar
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@978 -- # local readonly=
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@979 -- # local fio_bin=
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@980 -- # getopts :-: optchar
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@981 -- # case "$optchar" in
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@983 -- # case "$OPTARG" in
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@980 -- # getopts :-: optchar
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@993 -- # shift 1
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@994 -- # for vm_num in "$@"
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@995 -- # notice 'Starting fio server on VM1'
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM1'
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM1'
00:10:31.441  INFO: Starting fio server on VM1
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@997 -- # vm_exec 1 'cat > /root/fio; chmod +x /root/fio'
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:10:31.441    22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:31.441    22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:31.441    22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:31.441    22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:31.441    22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:31.441    22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:10:31.441  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@998 -- # vm_exec 1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:10:31.441    22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:31.441    22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:31.441    22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:31.441    22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:31.441    22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:31.441    22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:10:31.441  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@66 -- # disks_before_restart=
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@67 -- # get_disks virtio_scsi 1
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@24 -- # [[ virtio_scsi == \v\i\r\t\i\o\_\s\c\s\i ]]
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@25 -- # vm_check_scsi_location 1
00:10:31.441   22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1014 -- # local 'script=shopt -s nullglob;
00:10:31.441  	for entry in /sys/block/sd*; do
00:10:31.441  		disk_type="$(cat $entry/device/vendor)";
00:10:31.441  		if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then
00:10:31.441  			fname=$(basename $entry);
00:10:31.441  			echo -n " $fname";
00:10:31.441  		fi;
00:10:31.441  	done'
00:10:31.441    22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # echo 'shopt -s nullglob;
00:10:31.441  	for entry in /sys/block/sd*; do
00:10:31.441  		disk_type="$(cat $entry/device/vendor)";
00:10:31.441  		if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then
00:10:31.441  			fname=$(basename $entry);
00:10:31.441  			echo -n " $fname";
00:10:31.441  		fi;
00:10:31.441  	done'
00:10:31.441    22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # vm_exec 1 bash -s
00:10:31.441    22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:31.441    22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:31.441    22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:31.441    22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:31.441    22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:10:31.441     22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:31.441     22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:31.441     22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:31.441     22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:31.441     22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:31.441     22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:31.441    22:37:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 bash -s
00:10:31.441  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # SCSI_DISK=' sdb'
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1018 -- # [[ -z  sdb ]]
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@68 -- # disks_before_restart=' sdb'
00:10:31.701    22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@70 -- # printf :/dev/%s sdb
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@70 -- # fio_disks=' --vm=1:/dev/sdb'
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@71 -- # job_file=default_integrity.job
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@74 -- # run_fio --fio-bin=/usr/src/fio-static/fio --job-file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job --out=/root/vhost_test/fio_results --vm=1:/dev/sdb
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1053 -- # local arg
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1054 -- # local job_file=
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1055 -- # local fio_bin=
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1056 -- # vms=()
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1056 -- # local vms
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1057 -- # local out=
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1058 -- # local vm
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1059 -- # local run_server_mode=true
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1060 -- # local run_plugin_mode=false
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1061 -- # local fio_start_cmd
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1062 -- # local fio_output_format=normal
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1063 -- # local fio_gtod_reduce=false
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1064 -- # local wait_for_fio=true
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1069 -- # local fio_bin=/usr/src/fio-static/fio
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1068 -- # local job_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1072 -- # local out=/root/vhost_test/fio_results
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1073 -- # mkdir -p /root/vhost_test/fio_results
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1092 -- # [[ -n /usr/src/fio-static/fio ]]
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1092 -- # [[ ! -r /usr/src/fio-static/fio ]]
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1097 -- # [[ -z /usr/src/fio-static/fio ]]
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1101 -- # [[ ! -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job ]]
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1106 -- # fio_start_cmd='/usr/src/fio-static/fio --eta=never '
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1108 -- # local job_fname
00:10:31.701    22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1109 -- # basename /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1109 -- # job_fname=default_integrity.job
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1110 -- # log_fname=default_integrity.log
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1111 -- # fio_start_cmd+=' --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal '
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1115 -- # local vm_num=1
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1116 -- # local vmdisks=/dev/sdb
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/dev/sdb@;s@description=\(.*\)@description=\1 (VM=1)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1119 -- # vm_exec 1 'cat > /root/default_integrity.job'
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:10:31.701    22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:31.701    22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:31.701    22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:31.701    22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:31.701    22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:31.701    22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:31.701   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/default_integrity.job'
00:10:31.701  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:31.960   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1121 -- # false
00:10:31.960   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1125 -- # vm_exec 1 cat /root/default_integrity.job
00:10:31.960   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:31.960   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:31.960   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:31.960   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:31.960   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:10:31.961    22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:31.961    22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:31.961    22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:31.961    22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:31.961    22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:31.961    22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:31.961   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 cat /root/default_integrity.job
00:10:31.961  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:31.961  [global]
00:10:31.961  blocksize_range=4k-512k
00:10:31.961  iodepth=512
00:10:31.961  iodepth_batch=128
00:10:31.961  iodepth_low=256
00:10:31.961  ioengine=libaio
00:10:31.961  size=1G
00:10:31.961  io_size=4G
00:10:31.961  filename=/dev/sdb
00:10:31.961  group_reporting
00:10:31.961  thread
00:10:31.961  numjobs=1
00:10:31.961  direct=1
00:10:31.961  rw=randwrite
00:10:31.961  do_verify=1
00:10:31.961  verify=md5
00:10:31.961  verify_backlog=1024
00:10:31.961  fsync_on_close=1
00:10:31.961  verify_state_save=0
00:10:31.961  [nvme-host]
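The job file above is pushed into the guest and then driven from the host in fio's client/server mode. A minimal sketch of how the trace's `fio_start_cmd` string could be assembled — the helper name and argument order are assumptions, not the actual `vhost/common.sh` code:

```shell
# Hypothetical sketch of the fio command assembly seen in the trace.
# build_fio_start_cmd is an invented name; paths mirror the log above.
build_fio_start_cmd() {
    local fio_bin=$1 out_dir=$2 job=$3 fio_host=$4 fio_port=$5
    local job_fname log_fname cmd
    job_fname=$(basename "$job")        # e.g. default_integrity.job
    log_fname=${job_fname%.job}.log     # e.g. default_integrity.log
    cmd="$fio_bin --eta=never "
    cmd+="--output=$out_dir/$log_fname --output-format=normal "
    # --client/--remote-config run the job on the fio server inside the VM
    cmd+="--client=$fio_host,$fio_port --remote-config /root/$job_fname "
    printf '%s\n' "$cmd"
}
```

With the values from this run (`/usr/src/fio-static/fio`, results dir `/root/vhost_test/fio_results`, fio socket `127.0.0.1,10101`) this reproduces the command executed at `vhost/common.sh@1161`.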
00:10:31.961   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1127 -- # true
00:10:31.961    22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1128 -- # vm_fio_socket 1
00:10:31.961    22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@326 -- # vm_num_is_valid 1
00:10:31.961    22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:31.961    22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:31.961    22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/1
00:10:31.961    22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/1/fio_socket
00:10:31.961   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10101 --remote-config /root/default_integrity.job '
00:10:31.961   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1131 -- # true
00:10:31.961   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1147 -- # true
00:10:31.961   22:37:32 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1161 -- # /usr/src/fio-static/fio --eta=never --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal --client=127.0.0.1,10101 --remote-config /root/default_integrity.job
00:10:33.338  [2024-12-10 22:37:33.797370] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:38.610  [2024-12-10 22:37:38.767718] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:38.610  [2024-12-10 22:37:39.038859] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:42.797  [2024-12-10 22:37:43.321329] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:42.797  [2024-12-10 22:37:43.343948] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:43.055  [2024-12-10 22:37:43.619278] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:43.055   22:37:43 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1162 -- # sleep 1
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1164 -- # [[ normal == \j\s\o\n ]]
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1172 -- # [[ ! -n '' ]]
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1173 -- # cat /root/vhost_test/fio_results/default_integrity.log
00:10:43.992  hostname=VM-1-6-7, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:10:43.992  <VM-1-6-7> nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:10:43.992  <VM-1-6-7> Starting 1 thread
00:10:43.992  <VM-1-6-7> 
00:10:43.992  nvme-host: (groupid=0, jobs=1): err= 0: pid=961: Tue Dec 10 22:37:43 2024
00:10:43.992    read: IOPS=1282, BW=215MiB/s (226MB/s)(2048MiB/9521msec)
00:10:43.992      slat (usec): min=48, max=23019, avg=3033.40, stdev=4410.57
00:10:43.992      clat (msec): min=8, max=344, avg=138.21, stdev=73.56
00:10:43.992       lat (msec): min=9, max=344, avg=141.24, stdev=73.05
00:10:43.992      clat percentiles (msec):
00:10:43.992       |  1.00th=[   14],  5.00th=[   23], 10.00th=[   48], 20.00th=[   79],
00:10:43.992       | 30.00th=[   92], 40.00th=[  113], 50.00th=[  131], 60.00th=[  148],
00:10:43.992       | 70.00th=[  174], 80.00th=[  201], 90.00th=[  241], 95.00th=[  279],
00:10:43.992       | 99.00th=[  321], 99.50th=[  330], 99.90th=[  338], 99.95th=[  338],
00:10:43.992       | 99.99th=[  342]
00:10:43.992    write: IOPS=1366, BW=229MiB/s (240MB/s)(2048MiB/8937msec); 0 zone resets
00:10:43.992      slat (usec): min=312, max=98674, avg=22305.23, stdev=16166.49
00:10:43.992      clat (msec): min=6, max=300, avg=123.10, stdev=67.63
00:10:43.992       lat (msec): min=7, max=358, avg=145.40, stdev=71.92
00:10:43.992      clat percentiles (msec):
00:10:43.992       |  1.00th=[   12],  5.00th=[   22], 10.00th=[   29], 20.00th=[   66],
00:10:43.992       | 30.00th=[   83], 40.00th=[   97], 50.00th=[  113], 60.00th=[  136],
00:10:43.992       | 70.00th=[  159], 80.00th=[  186], 90.00th=[  220], 95.00th=[  247],
00:10:43.992       | 99.00th=[  279], 99.50th=[  300], 99.90th=[  300], 99.95th=[  300],
00:10:43.992       | 99.99th=[  300]
00:10:43.992     bw (  KiB/s): min= 7368, max=464680, per=99.30%, avg=233016.89, stdev=126341.53, samples=18
00:10:43.992     iops        : min=   30, max= 2048, avg=1356.44, stdev=683.20, samples=18
00:10:43.992    lat (msec)   : 10=0.48%, 20=4.31%, 50=8.03%, 100=24.71%, 250=55.94%
00:10:43.992    lat (msec)   : 500=6.52%
00:10:43.992    cpu          : usr=92.73%, sys=2.20%, ctx=510, majf=0, minf=34
00:10:43.992    IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.5%, >=64=99.1%
00:10:43.992       submit    : 0=0.0%, 4=0.0%, 8=1.2%, 16=0.0%, 32=0.0%, 64=19.2%, >=64=79.6%
00:10:43.992       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:10:43.992       issued rwts: total=12208,12208,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:43.992       latency   : target=0, window=0, percentile=100.00%, depth=512
00:10:43.992  
00:10:43.992  Run status group 0 (all jobs):
00:10:43.992     READ: bw=215MiB/s (226MB/s), 215MiB/s-215MiB/s (226MB/s-226MB/s), io=2048MiB (2147MB), run=9521-9521msec
00:10:43.992    WRITE: bw=229MiB/s (240MB/s), 229MiB/s-229MiB/s (240MB/s-240MB/s), io=2048MiB (2147MB), run=8937-8937msec
00:10:43.992  
00:10:43.992  Disk stats (read/write):
00:10:43.992    sdb: ios=12315/12135, merge=85/87, ticks=160951/109277, in_queue=270229, util=29.87%
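If the aggregate bandwidth figures from a "Run status" section like the one above need to be extracted programmatically (for trending or pass/fail gating), a small awk filter suffices. This is a hedged sketch against the fio-3.x normal-output format shown here; the function name is an assumption:

```shell
# Pull "READ 215MiB/s" / "WRITE 229MiB/s" pairs out of a fio log's
# Run status lines. parse_fio_bw is an invented helper name.
parse_fio_bw() {
    # $1 = path to a fio log in --output-format=normal
    awk '/^ *(READ|WRITE):/ {
        gsub(":", "", $1)       # READ: -> READ
        sub("bw=", "", $2)      # bw=215MiB/s, -> 215MiB/s,
        sub(",$", "", $2)       # strip trailing comma
        print $1, $2
    }' "$1"
}
```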
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@77 -- # notice 'Shutting down virtual machine...'
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...'
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...'
00:10:43.992  INFO: Shutting down virtual machine...
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@78 -- # vm_shutdown_all
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:10:43.992    22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:10:43.992    22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@466 -- # vms=()
00:10:43.992    22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@466 -- # local vms
00:10:43.992    22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:10:43.992    22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:10:43.992    22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:10:43.992    22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=97382
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 97382
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:43.992   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:10:43.993   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:10:43.993  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:10:43.993   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@432 -- # set +e
00:10:43.993   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:10:43.993   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:43.993   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:43.993   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:43.993   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:43.993   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:10:43.993    22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:43.993    22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:43.993    22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:43.993    22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:43.993    22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:43.993    22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:43.993   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:10:43.993  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:10:44.252  INFO: VM1 is shutting down - wait a while to complete
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@435 -- # set -e
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:10:44.252  INFO: Waiting for VMs to shutdown...
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:10:44.252    22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=97382
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 97382
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:10:44.252   22:37:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:10:45.188   22:37:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:10:45.188   22:37:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:10:45.188   22:37:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:10:45.188   22:37:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:10:45.188   22:37:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:45.188   22:37:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:45.188   22:37:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:10:45.188   22:37:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:45.188   22:37:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:10:45.188    22:37:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:10:45.188   22:37:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=97382
00:10:45.188   22:37:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 97382
00:10:45.188   22:37:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:10:45.188   22:37:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:10:46.566   22:37:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:10:46.566   22:37:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:10:46.566   22:37:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:10:46.566   22:37:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:10:46.566   22:37:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:46.566   22:37:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:46.566   22:37:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:10:46.566   22:37:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:46.566   22:37:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@373 -- # return 1
00:10:46.566   22:37:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:10:46.566   22:37:46 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:10:47.502   22:37:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:10:47.502   22:37:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:10:47.502   22:37:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:10:47.502   22:37:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:10:47.502   22:37:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:47.502   22:37:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:10:47.502   22:37:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:47.502   22:37:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:47.502   22:37:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:10:47.502   22:37:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:10:47.502  INFO: All VMs successfully shut down
00:10:47.502   22:37:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@505 -- # return 0
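The shutdown sequence above sends `shutdown -h -P now` over SSH, then polls the QEMU pid with `/bin/kill -0` once per second until the process exits or a 90-iteration budget runs out. A minimal sketch of that wait loop, assuming a bash shell (the function name is an invention, not the real `vhost/common.sh` helper):

```shell
# Poll a pid with `kill -0` until it exits or the timeout elapses,
# mirroring the timeo/sleep loop traced at vhost/common.sh@496-500.
wait_for_pid_exit() {
    local pid=$1 timeo=${2:-90}
    while (( timeo-- > 0 )); do
        if ! kill -0 "$pid" 2>/dev/null; then
            return 0        # process gone: VM shut down cleanly
        fi
        sleep 1
    done
    return 1                # still running after the timeout
}
```

`kill -0` delivers no signal; it only checks that the pid exists and is signalable, which is why the trace keeps returning 0 until QEMU actually exits.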
00:10:47.503   22:37:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@81 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:10:47.503   22:37:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:10:47.503   22:37:47 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:10:47.503  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:10:47.503  INFO: Creating new VM in /root/vhost_test/vms/1
00:10:47.503  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:10:47.503  INFO: TASK MASK: 6-7
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:10:47.503  INFO: NUMA NODE: 0
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user_virtio
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@766 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:10:47.503  INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@767 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@768 -- # [[ 1 == '' ]]
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:10:47.503  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@787 -- # cat
00:10:47.503    22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@835 -- # echo 101
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@82 -- # vm_run 1
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@856 -- # false
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@859 -- # shift 0
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@373 -- # return 1
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:10:47.503   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:47.504   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:10:47.504   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:47.504   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:47.504   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:10:47.504   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:10:47.504  INFO: running /root/vhost_test/vms/1/run.sh
00:10:47.504   22:37:48 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:10:47.504  Running VM in /root/vhost_test/vms/1
00:10:47.762  [2024-12-10 22:37:48.427593] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully
00:10:47.762  Waiting for QEMU pid file
00:10:49.139  === qemu.log ===
00:10:49.139  === qemu.log ===
00:10:49.139   22:37:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@83 -- # vm_wait_for_boot 60 1
00:10:49.139   22:37:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:10:49.139   22:37:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:10:49.139   22:37:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@281 -- # return 0
00:10:49.139   22:37:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:10:49.139   22:37:49 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:10:49.139  INFO: Waiting for VMs to boot
00:10:49.139  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:11:04.023  [2024-12-10 22:38:03.152577] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:11:10.585  
00:11:10.585  INFO: VM1 ready
00:11:10.585  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:10.585  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:11.523  INFO: all VMs ready
00:11:11.523   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@973 -- # return 0
00:11:11.523   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@86 -- # disks_after_restart=
00:11:11.523   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@87 -- # get_disks virtio_scsi 1
00:11:11.523   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@24 -- # [[ virtio_scsi == \v\i\r\t\i\o\_\s\c\s\i ]]
00:11:11.523   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@25 -- # vm_check_scsi_location 1
00:11:11.523   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1014 -- # local 'script=shopt -s nullglob;
00:11:11.523  	for entry in /sys/block/sd*; do
00:11:11.523  		disk_type="$(cat $entry/device/vendor)";
00:11:11.523  		if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then
00:11:11.523  			fname=$(basename $entry);
00:11:11.523  			echo -n " $fname";
00:11:11.523  		fi;
00:11:11.523  	done'
00:11:11.523    22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # echo 'shopt -s nullglob;
00:11:11.523  	for entry in /sys/block/sd*; do
00:11:11.523  		disk_type="$(cat $entry/device/vendor)";
00:11:11.523  		if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then
00:11:11.523  			fname=$(basename $entry);
00:11:11.523  			echo -n " $fname";
00:11:11.523  		fi;
00:11:11.523  	done'
00:11:11.523    22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # vm_exec 1 bash -s
00:11:11.523    22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:11:11.523    22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:11.523    22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:11.523    22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:11:11.523    22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:11:11.523     22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:11:11.523     22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:11:11.523     22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:11.523     22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:11.523     22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:11:11.523     22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:11:11.523    22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 bash -s
00:11:11.523  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:11.786   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # SCSI_DISK=' sdb'
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1018 -- # [[ -z  sdb ]]
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@88 -- # disks_after_restart=' sdb'
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@90 -- # [[  sdb != \ \s\d\b ]]
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@96 -- # notice 'Shutting down virtual machine...'
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...'
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...'
00:11:11.787  INFO: Shutting down virtual machine...
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@97 -- # vm_shutdown_all
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:11:11.787    22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:11:11.787    22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@466 -- # vms=()
00:11:11.787    22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@466 -- # local vms
00:11:11.787    22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:11:11.787    22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:11:11.787    22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:11:11.787    22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=106991
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 106991
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:11:11.787  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@432 -- # set +e
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:11:11.787    22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:11:11.787    22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:11:11.787    22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:11.787    22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:11.787    22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:11:11.787    22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:11:11.787   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:11:11.787  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:11:12.046  INFO: VM1 is shutting down - wait a while to complete
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@435 -- # set -e
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:11:12.046  INFO: Waiting for VMs to shutdown...
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:11:12.046    22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=106991
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 106991
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:11:12.046   22:38:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:11:12.983   22:38:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:11:12.983   22:38:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:11:12.983   22:38:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:11:12.983   22:38:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:11:12.983   22:38:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:12.983   22:38:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:12.983   22:38:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:11:12.983   22:38:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:11:12.983   22:38:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:11:12.983    22:38:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:11:12.983   22:38:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=106991
00:11:12.983   22:38:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 106991
00:11:12.983   22:38:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:11:12.983   22:38:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:11:13.922   22:38:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:11:13.922   22:38:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:11:13.922   22:38:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:11:13.922   22:38:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:11:13.922   22:38:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:13.922   22:38:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:13.922   22:38:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:11:13.922   22:38:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:11:13.922   22:38:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@373 -- # return 1
00:11:13.922   22:38:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:11:13.922   22:38:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:11:15.309   22:38:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:11:15.309   22:38:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:11:15.309   22:38:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:11:15.309   22:38:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:11:15.309   22:38:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:15.309   22:38:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:11:15.309   22:38:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:15.309   22:38:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:15.309   22:38:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:11:15.309   22:38:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:11:15.309  INFO: All VMs successfully shut down
00:11:15.309   22:38:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@505 -- # return 0
00:11:15.309   22:38:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@99 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_detach_controller Nvme0
00:11:15.309  [2024-12-10 22:38:15.892909] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Nvme0n1) received event(SPDK_BDEV_EVENT_REMOVE)
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@101 -- # vhost_kill 0
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@202 -- # local rc=0
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@203 -- # local vhost_name=0
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@210 -- # local vhost_dir
00:11:16.687    22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@211 -- # get_vhost_dir 0
00:11:16.687    22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:11:16.687    22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:11:16.687    22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@220 -- # local vhost_pid
00:11:16.687    22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@221 -- # vhost_pid=96308
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@222 -- # notice 'killing vhost (PID 96308) app'
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 96308) app'
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 96308) app'
00:11:16.687  INFO: killing vhost (PID 96308) app
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@224 -- # kill -INT 96308
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:11:16.687  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i = 0 ))
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@227 -- # kill -0 96308
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@228 -- # echo .
00:11:16.687  .
00:11:16.687   22:38:17 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@229 -- # sleep 1
00:11:17.627   22:38:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i++ ))
00:11:17.627   22:38:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:11:17.627   22:38:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@227 -- # kill -0 96308
00:11:17.627   22:38:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@228 -- # echo .
00:11:17.627  .
00:11:17.627   22:38:18 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@229 -- # sleep 1
00:11:18.561   22:38:19 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i++ ))
00:11:18.561   22:38:19 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:11:18.820   22:38:19 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@227 -- # kill -0 96308
00:11:18.820   22:38:19 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@228 -- # echo .
00:11:18.820  .
00:11:18.820   22:38:19 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@229 -- # sleep 1
00:11:19.757   22:38:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i++ ))
00:11:19.757   22:38:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:11:19.757   22:38:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@227 -- # kill -0 96308
00:11:19.757  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (96308) - No such process
00:11:19.757   22:38:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@231 -- # break
00:11:19.757   22:38:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@234 -- # kill -0 96308
00:11:19.757  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (96308) - No such process
00:11:19.757   22:38:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@239 -- # kill -0 96308
00:11:19.757  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (96308) - No such process
00:11:19.757   22:38:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@245 -- # is_pid_child 96308
00:11:19.757   22:38:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1686 -- # local pid=96308 _pid
00:11:19.757   22:38:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1688 -- # read -r _pid
00:11:19.757    22:38:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1685 -- # jobs -pr
00:11:19.757   22:38:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1689 -- # (( pid == _pid ))
00:11:19.757   22:38:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1688 -- # read -r _pid
00:11:19.757   22:38:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1692 -- # return 1
00:11:19.757   22:38:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:11:19.757   22:38:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:19.757   22:38:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:11:19.757   22:38:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:11:19.757   22:38:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@261 -- # return 0
00:11:19.757   22:38:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@103 -- # vhosttestfini
00:11:19.757   22:38:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:11:19.757  
00:11:19.757  real	1m32.226s
00:11:19.757  user	5m59.883s
00:11:19.757  sys	0m2.184s
00:11:19.757   22:38:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:19.757   22:38:20 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:11:19.757  ************************************
00:11:19.757  END TEST vfio_user_virtio_scsi_restart_vm
00:11:19.757  ************************************
00:11:19.757   22:38:20 vfio_user_qemu -- vfio_user/vfio_user.sh@19 -- # run_test vfio_user_virtio_bdevperf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/initiator_bdevperf.sh
00:11:19.757   22:38:20 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:19.757   22:38:20 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:19.757   22:38:20 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:11:19.757  ************************************
00:11:19.757  START TEST vfio_user_virtio_bdevperf
00:11:19.757  ************************************
00:11:19.757   22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/initiator_bdevperf.sh
00:11:19.757  * Looking for test storage...
00:11:19.757  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:11:19.757    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:19.757     22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version
00:11:19.757     22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:20.017    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:20.017    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:20.017    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:20.017    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:20.017    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:11:20.017    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:11:20.017    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:11:20.017    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:11:20.017    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:11:20.017    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:11:20.017    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:11:20.017    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:20.017    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:11:20.017    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@345 -- # : 1
00:11:20.017    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:20.017    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:20.017     22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:11:20.017     22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@353 -- # local d=1
00:11:20.017     22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:20.017     22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@355 -- # echo 1
00:11:20.017    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:11:20.017     22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:11:20.017     22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@353 -- # local d=2
00:11:20.017     22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:20.017     22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@355 -- # echo 2
00:11:20.017    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:11:20.017    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:20.017    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:20.017    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@368 -- # return 0
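The `lt 1.15 2` / `cmp_versions` trace above walks a field-wise numeric comparison: both versions are split on `.`, `-`, and `:` into arrays, then compared component by component with missing fields treated as 0. A condensed, self-contained sketch of that logic (the function name `version_lt` is mine; the real `cmp_versions` also handles `>`, `<=`, `>=`, and `==` operators):

```shell
#!/usr/bin/env bash
# Sketch of the scripts/common.sh version comparison driven in the
# xtrace above: returns 0 when $1 is strictly less than $2.
version_lt() {
	local IFS=.-: v len ver1 ver2
	read -ra ver1 <<< "$1"
	read -ra ver2 <<< "$2"
	len=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
	for ((v = 0; v < len; v++)); do
		# 10# forces base-10 so fields like "09" don't parse as octal;
		# :-0 pads the shorter version with zero fields.
		((10#${ver1[v]:-0} < 10#${ver2[v]:-0})) && return 0
		((10#${ver1[v]:-0} > 10#${ver2[v]:-0})) && return 1
	done
	return 1 # equal is not less-than
}
```

In the log this gate (`lt 1.15 2` on the lcov version) decides whether branch/function coverage flags are appended to `LCOV_OPTS`.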
00:11:20.017    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:20.017    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:20.017  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:20.017  		--rc genhtml_branch_coverage=1
00:11:20.017  		--rc genhtml_function_coverage=1
00:11:20.017  		--rc genhtml_legend=1
00:11:20.017  		--rc geninfo_all_blocks=1
00:11:20.017  		--rc geninfo_unexecuted_blocks=1
00:11:20.017  		
00:11:20.017  		'
00:11:20.017    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:20.017  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:20.017  		--rc genhtml_branch_coverage=1
00:11:20.017  		--rc genhtml_function_coverage=1
00:11:20.017  		--rc genhtml_legend=1
00:11:20.017  		--rc geninfo_all_blocks=1
00:11:20.017  		--rc geninfo_unexecuted_blocks=1
00:11:20.017  		
00:11:20.017  		'
00:11:20.017    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:20.017  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:20.017  		--rc genhtml_branch_coverage=1
00:11:20.017  		--rc genhtml_function_coverage=1
00:11:20.017  		--rc genhtml_legend=1
00:11:20.017  		--rc geninfo_all_blocks=1
00:11:20.017  		--rc geninfo_unexecuted_blocks=1
00:11:20.017  		
00:11:20.017  		'
00:11:20.017    22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:20.017  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:20.017  		--rc genhtml_branch_coverage=1
00:11:20.017  		--rc genhtml_function_coverage=1
00:11:20.017  		--rc genhtml_legend=1
00:11:20.017  		--rc geninfo_all_blocks=1
00:11:20.017  		--rc geninfo_unexecuted_blocks=1
00:11:20.017  		
00:11:20.017  		'
00:11:20.017   22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@9 -- # rpc_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:11:20.017   22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@11 -- # vfu_dir=/tmp/vfu_devices
00:11:20.017   22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@12 -- # rm -rf /tmp/vfu_devices
00:11:20.017   22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@13 -- # mkdir -p /tmp/vfu_devices
00:11:20.017   22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@17 -- # spdk_tgt_pid=112651
00:11:20.017   22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0xf -L vfu_virtio
00:11:20.017   22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@18 -- # waitforlisten 112651
00:11:20.017   22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 112651 ']'
00:11:20.017   22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:20.017   22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:20.017   22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:20.017  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:20.017   22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:20.017   22:38:20 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:11:20.017  [2024-12-10 22:38:20.664135] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:11:20.017  [2024-12-10 22:38:20.664242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112651 ]
00:11:20.017  EAL: No free 2048 kB hugepages reported on node 1
00:11:20.017  [2024-12-10 22:38:20.794743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:20.277  [2024-12-10 22:38:20.949365] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:11:20.277  [2024-12-10 22:38:20.949447] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:11:20.277  [2024-12-10 22:38:20.949501] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:11:20.277  [2024-12-10 22:38:20.949514] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:11:21.211   22:38:21 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:21.211   22:38:21 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:11:21.211   22:38:21 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create -b malloc0 64 512
00:11:21.779  malloc0
00:11:21.779   22:38:22 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create -b malloc1 64 512
00:11:22.038  malloc1
00:11:22.038   22:38:22 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@22 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create -b malloc2 64 512
00:11:22.297  malloc2
00:11:22.297   22:38:22 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@24 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_tgt_set_base_path /tmp/vfu_devices
00:11:22.556   22:38:23 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_create_blk_endpoint vfu.blk --bdev-name malloc0 --cpumask=0x1 --num-queues=2 --qsize=256 --packed-ring
00:11:22.556  [2024-12-10 22:38:23.298996] vfu_virtio.c:1533:vfu_virtio_endpoint_setup: *DEBUG*: mmap file /tmp/vfu_devices/vfu.blk_bar4, devmem_fd 470
00:11:22.556  [2024-12-10 22:38:23.299066] vfu_virtio.c:1695:vfu_virtio_get_device_info: *DEBUG*: /tmp/vfu_devices/vfu.blk: get device information, fd 470
00:11:22.556  [2024-12-10 22:38:23.299247] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.blk: get vendor capability, idx 0
00:11:22.556  [2024-12-10 22:38:23.299284] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.blk: get vendor capability, idx 1
00:11:22.556  [2024-12-10 22:38:23.299298] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.blk: get vendor capability, idx 2
00:11:22.556  [2024-12-10 22:38:23.299314] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.blk: get vendor capability, idx 3
00:11:22.556   22:38:23 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_create_scsi_endpoint vfu.scsi --cpumask 0x2 --num-io-queues=2 --qsize=256 --packed-ring
00:11:22.814  [2024-12-10 22:38:23.523945] vfu_virtio.c:1533:vfu_virtio_endpoint_setup: *DEBUG*: mmap file /tmp/vfu_devices/vfu.scsi_bar4, devmem_fd 574
00:11:22.814  [2024-12-10 22:38:23.523987] vfu_virtio.c:1695:vfu_virtio_get_device_info: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get device information, fd 574
00:11:22.814  [2024-12-10 22:38:23.524059] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get vendor capability, idx 0
00:11:22.814  [2024-12-10 22:38:23.524092] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get vendor capability, idx 1
00:11:22.814  [2024-12-10 22:38:23.524105] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get vendor capability, idx 2
00:11:22.814  [2024-12-10 22:38:23.524121] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get vendor capability, idx 3
00:11:22.814   22:38:23 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@33 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_scsi_add_target vfu.scsi --scsi-target-num=0 --bdev-name malloc1
00:11:23.073  [2024-12-10 22:38:23.740883] vfu_virtio_scsi.c: 886:vfu_virtio_scsi_add_target: *NOTICE*: vfu.scsi: added SCSI target 0 using bdev 'malloc1'
00:11:23.073   22:38:23 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_scsi_add_target vfu.scsi --scsi-target-num=1 --bdev-name malloc2
00:11:23.331  [2024-12-10 22:38:23.953773] vfu_virtio_scsi.c: 886:vfu_virtio_scsi_add_target: *NOTICE*: vfu.scsi: added SCSI target 1 using bdev 'malloc2'
00:11:23.331   22:38:23 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@37 -- # bdevperf=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/examples/bdevperf
00:11:23.331   22:38:23 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@38 -- # bdevperf_rpc_sock=/tmp/bdevperf.sock
00:11:23.331   22:38:23 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@41 -- # bdevperf_pid=113275
00:11:23.331   22:38:23 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@40 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/examples/bdevperf -r /tmp/bdevperf.sock -g -s 2048 -q 256 -o 4096 -w randrw -M 50 -t 30 -m 0xf0 -L vfio_pci -L virtio_vfio_user
00:11:23.331   22:38:23 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@42 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
00:11:23.331   22:38:23 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@43 -- # waitforlisten 113275 /tmp/bdevperf.sock
00:11:23.331   22:38:23 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 113275 ']'
00:11:23.331   22:38:23 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/bdevperf.sock
00:11:23.331   22:38:23 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:23.331   22:38:23 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/bdevperf.sock...'
00:11:23.331  Waiting for process to start up and listen on UNIX domain socket /tmp/bdevperf.sock...
00:11:23.331   22:38:23 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:23.331   22:38:23 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@10 -- # set +x
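`waitforlisten 113275 /tmp/bdevperf.sock` above blocks until the freshly launched bdevperf process is up and serving RPC on its UNIX socket. A deliberately simplified sketch of the waiting side (the real helper also verifies the PID is still alive and probes the socket with an actual RPC call; here I only poll for the socket path, and `wait_for_socket` is my own name):

```shell
#!/usr/bin/env bash
# Simplified sketch of the waitforlisten idea: poll for a UNIX-domain
# socket path to appear, up to $2 attempts at 100 ms intervals.
wait_for_socket() {
	local sock=$1 retries=${2:-100}
	while ((retries-- > 0)); do
		# -S is true only for a socket inode, not a plain file.
		[[ -S $sock ]] && return 0
		sleep 0.1
	done
	return 1
}
```

Checking only for the socket file is a known shortcut: the listener may not yet be accepting connections when the inode appears, which is why the production helper follows up with an RPC round-trip before returning.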
00:11:23.331  [2024-12-10 22:38:24.059555] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:11:23.331  [2024-12-10 22:38:24.059661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0xf0 -m 2048 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113275 ]
00:11:23.589  EAL: No free 2048 kB hugepages reported on node 1
00:11:24.157  [2024-12-10 22:38:24.869155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:24.416  [2024-12-10 22:38:24.986888] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:11:24.416  [2024-12-10 22:38:24.986966] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:11:24.416  [2024-12-10 22:38:24.986990] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:11:24.416  [2024-12-10 22:38:24.986995] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7
00:11:24.983   22:38:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:24.984   22:38:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:11:24.984   22:38:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@44 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /tmp/bdevperf.sock bdev_virtio_attach_controller --dev-type scsi --trtype vfio-user --traddr /tmp/vfu_devices/vfu.scsi VirtioScsi0
00:11:25.244  [2024-12-10 22:38:25.837479] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /tmp/vfu_devices/vfu.scsi: attached successfully
00:11:25.244  [2024-12-10 22:38:25.839690] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:11:25.244  [2024-12-10 22:38:25.840672] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:11:25.244  [2024-12-10 22:38:25.841655] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:11:25.244  [2024-12-10 22:38:25.842679] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:11:25.244  [2024-12-10 22:38:25.843705] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x4000, Offset 0x0, Flags 0xf, Cap offset 32
00:11:25.244  [2024-12-10 22:38:25.843807] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x3000, Map addr 0x7f3ca979f000
00:11:25.244  [2024-12-10 22:38:25.844694] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:11:25.244  [2024-12-10 22:38:25.845670] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:11:25.244  [2024-12-10 22:38:25.846675] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:11:25.244  [2024-12-10 22:38:25.847701] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:11:25.244  [2024-12-10 22:38:25.848688] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:11:25.244  [2024-12-10 22:38:25.850354] vfio_user_pci.c:  65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x80000000
00:11:25.244  [2024-12-10 22:38:25.860640] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /tmp/vfu_devices/vfu.scsi Setup Successfully
00:11:25.244  [2024-12-10 22:38:25.861820] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x0, length 0x4
00:11:25.244  [2024-12-10 22:38:25.862792] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x2000-0x2003, len = 4
00:11:25.244  [2024-12-10 22:38:25.862848] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 0
00:11:25.244  [2024-12-10 22:38:25.863788] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:11:25.244  [2024-12-10 22:38:25.863815] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0x0
00:11:25.244  [2024-12-10 22:38:25.863836] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 0, set status 0
00:11:25.244  [2024-12-10 22:38:25.863852] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.scsi resetting
00:11:25.244  [2024-12-10 22:38:25.864801] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:11:25.244  [2024-12-10 22:38:25.864822] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x0
00:11:25.244  [2024-12-10 22:38:25.864855] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:11:25.244  [2024-12-10 22:38:25.865803] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:11:25.244  [2024-12-10 22:38:25.865821] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x0
00:11:25.244  [2024-12-10 22:38:25.865883] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:11:25.244  [2024-12-10 22:38:25.865931] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 1
00:11:25.244  [2024-12-10 22:38:25.866813] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:11:25.244  [2024-12-10 22:38:25.866832] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0x1
00:11:25.244  [2024-12-10 22:38:25.866843] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 0, set status 1
00:11:25.244  [2024-12-10 22:38:25.867834] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:11:25.244  [2024-12-10 22:38:25.867854] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x1
00:11:25.244  [2024-12-10 22:38:25.867892] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 1
00:11:25.244  [2024-12-10 22:38:25.868831] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:11:25.244  [2024-12-10 22:38:25.868845] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x1
00:11:25.244  [2024-12-10 22:38:25.868882] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 1
00:11:25.244  [2024-12-10 22:38:25.868909] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 3
00:11:25.244  [2024-12-10 22:38:25.869836] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:11:25.244  [2024-12-10 22:38:25.869851] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0x3
00:11:25.244  [2024-12-10 22:38:25.869864] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 1, set status 3
00:11:25.244  [2024-12-10 22:38:25.870843] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:11:25.244  [2024-12-10 22:38:25.870864] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x3
00:11:25.244  [2024-12-10 22:38:25.870891] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 3
00:11:25.244  [2024-12-10 22:38:25.871855] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x0-0x3, len = 4
00:11:25.244  [2024-12-10 22:38:25.871874] vfu_virtio.c: 937:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_DFSELECT with 0x0
00:11:25.244  [2024-12-10 22:38:25.872859] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x4-0x7, len = 4
00:11:25.244  [2024-12-10 22:38:25.872879] vfu_virtio.c:1072:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_DF_LO with 0x10000007
00:11:25.244  [2024-12-10 22:38:25.873862] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x0-0x3, len = 4
00:11:25.244  [2024-12-10 22:38:25.873881] vfu_virtio.c: 937:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_DFSELECT with 0x1
00:11:25.244  [2024-12-10 22:38:25.874866] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x4-0x7, len = 4
00:11:25.244  [2024-12-10 22:38:25.874890] vfu_virtio.c:1067:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_DF_HI with 0x5
00:11:25.244  [2024-12-10 22:38:25.874924] virtio_vfio_user.c: 127:virtio_vfio_user_get_features: *DEBUG*: feature_hi 0x5, feature_low 0x10000007
00:11:25.244  [2024-12-10 22:38:25.875881] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x8-0xB, len = 4
00:11:25.244  [2024-12-10 22:38:25.875899] vfu_virtio.c: 943:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_GFSELECT with 0x0
00:11:25.244  [2024-12-10 22:38:25.876889] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0xC-0xF, len = 4
00:11:25.244  [2024-12-10 22:38:25.876908] vfu_virtio.c: 956:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_GF_LO with 0x3
00:11:25.244  [2024-12-10 22:38:25.876921] vfu_virtio.c: 255:virtio_dev_set_features: *DEBUG*: vfu.scsi: negotiated features 0x3
00:11:25.244  [2024-12-10 22:38:25.877903] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x8-0xB, len = 4
00:11:25.244  [2024-12-10 22:38:25.877918] vfu_virtio.c: 943:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_GFSELECT with 0x1
00:11:25.244  [2024-12-10 22:38:25.878913] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0xC-0xF, len = 4
00:11:25.244  [2024-12-10 22:38:25.878928] vfu_virtio.c: 951:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_GF_HI with 0x1
00:11:25.244  [2024-12-10 22:38:25.878948] vfu_virtio.c: 255:virtio_dev_set_features: *DEBUG*: vfu.scsi: negotiated features 0x100000003
00:11:25.244  [2024-12-10 22:38:25.878981] virtio_vfio_user.c: 176:virtio_vfio_user_set_features: *DEBUG*: features 0x100000003
00:11:25.244  [2024-12-10 22:38:25.879919] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:11:25.244  [2024-12-10 22:38:25.879938] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x3
00:11:25.244  [2024-12-10 22:38:25.879970] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 3
00:11:25.244  [2024-12-10 22:38:25.879989] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status b
00:11:25.244  [2024-12-10 22:38:25.880930] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:11:25.244  [2024-12-10 22:38:25.880948] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0xb
00:11:25.244  [2024-12-10 22:38:25.880959] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 3, set status b
00:11:25.244  [2024-12-10 22:38:25.881952] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:11:25.244  [2024-12-10 22:38:25.881967] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0xb
00:11:25.244  [2024-12-10 22:38:25.882004] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status b
00:11:25.244  [2024-12-10 22:38:25.882951] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:11:25.244  [2024-12-10 22:38:25.882966] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x0
00:11:25.244  [2024-12-10 22:38:25.883964] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x18-0x19, len = 2
00:11:25.244  [2024-12-10 22:38:25.883984] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ queue 0 PCI_COMMON_Q_SIZE with 0x100
00:11:25.244  [2024-12-10 22:38:25.884021] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 0, size 256
00:11:25.244  [2024-12-10 22:38:25.884970] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:11:25.244  [2024-12-10 22:38:25.884985] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x0
00:11:25.244  [2024-12-10 22:38:25.885982] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x20-0x23, len = 4
00:11:25.244  [2024-12-10 22:38:25.885998] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_DESCLO with 0x69aec000
00:11:25.244  [2024-12-10 22:38:25.886987] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x24-0x27, len = 4
00:11:25.244  [2024-12-10 22:38:25.887003] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_DESCHI with 0x2000
00:11:25.244  [2024-12-10 22:38:25.887994] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x28-0x2B, len = 4
00:11:25.244  [2024-12-10 22:38:25.888009] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_AVAILLO with 0x69aed000
00:11:25.245  [2024-12-10 22:38:25.889004] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x2C-0x2F, len = 4
00:11:25.245  [2024-12-10 22:38:25.889019] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_AVAILHI with 0x2000
00:11:25.245  [2024-12-10 22:38:25.890018] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x30-0x33, len = 4
00:11:25.245  [2024-12-10 22:38:25.890033] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_USEDLO with 0x69aee000
00:11:25.245  [2024-12-10 22:38:25.891018] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x34-0x37, len = 4
00:11:25.245  [2024-12-10 22:38:25.891034] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_USEDHI with 0x2000
00:11:25.245  [2024-12-10 22:38:25.892028] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x1E-0x1F, len = 2
00:11:25.245  [2024-12-10 22:38:25.892043] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_Q_NOFF with 0x0
00:11:25.245  [2024-12-10 22:38:25.893038] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:11:25.245  [2024-12-10 22:38:25.893053] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:11:25.245  [2024-12-10 22:38:25.893068] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.scsi: enable vq 0
00:11:25.245  [2024-12-10 22:38:25.893079] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.scsi: try to map vq 0
00:11:25.245  [2024-12-10 22:38:25.893108] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.scsi: map vq 0 successfully
00:11:25.245  [2024-12-10 22:38:25.893156] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 0 addresses:
00:11:25.245  [2024-12-10 22:38:25.893190] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 200069aec000
00:11:25.245  [2024-12-10 22:38:25.893206] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 200069aed000
00:11:25.245  [2024-12-10 22:38:25.893219] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 200069aee000
00:11:25.245  [2024-12-10 22:38:25.894044] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:11:25.245  [2024-12-10 22:38:25.894063] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x1
00:11:25.245  [2024-12-10 22:38:25.895051] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x18-0x19, len = 2
00:11:25.245  [2024-12-10 22:38:25.895073] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ queue 1 PCI_COMMON_Q_SIZE with 0x100
00:11:25.245  [2024-12-10 22:38:25.895142] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 1, size 256
00:11:25.245  [2024-12-10 22:38:25.896063] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:11:25.245  [2024-12-10 22:38:25.896107] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x1
00:11:25.245  [2024-12-10 22:38:25.897089] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x20-0x23, len = 4
00:11:25.245  [2024-12-10 22:38:25.897122] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_DESCLO with 0x69ae8000
00:11:25.245  [2024-12-10 22:38:25.898094] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x24-0x27, len = 4
00:11:25.245  [2024-12-10 22:38:25.898124] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_DESCHI with 0x2000
00:11:25.245  [2024-12-10 22:38:25.899130] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x28-0x2B, len = 4
00:11:25.245  [2024-12-10 22:38:25.899149] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_AVAILLO with 0x69ae9000
00:11:25.245  [2024-12-10 22:38:25.900115] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x2C-0x2F, len = 4
00:11:25.245  [2024-12-10 22:38:25.900156] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_AVAILHI with 0x2000
00:11:25.245  [2024-12-10 22:38:25.901122] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x30-0x33, len = 4
00:11:25.245  [2024-12-10 22:38:25.901152] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_USEDLO with 0x69aea000
00:11:25.245  [2024-12-10 22:38:25.902135] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x34-0x37, len = 4
00:11:25.245  [2024-12-10 22:38:25.902165] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_USEDHI with 0x2000
00:11:25.245  [2024-12-10 22:38:25.903149] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x1E-0x1F, len = 2
00:11:25.245  [2024-12-10 22:38:25.903179] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_Q_NOFF with 0x1
00:11:25.245  [2024-12-10 22:38:25.904161] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:11:25.245  [2024-12-10 22:38:25.904197] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:11:25.245  [2024-12-10 22:38:25.904208] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.scsi: enable vq 1
00:11:25.245  [2024-12-10 22:38:25.904220] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.scsi: try to map vq 1
00:11:25.245  [2024-12-10 22:38:25.904232] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.scsi: map vq 1 successfully
00:11:25.245  [2024-12-10 22:38:25.904279] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 1 addresses:
00:11:25.245  [2024-12-10 22:38:25.904340] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 200069ae8000
00:11:25.245  [2024-12-10 22:38:25.904359] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 200069ae9000
00:11:25.245  [2024-12-10 22:38:25.904375] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 200069aea000
00:11:25.245  [2024-12-10 22:38:25.905172] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:11:25.245  [2024-12-10 22:38:25.905200] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x2
00:11:25.245  [2024-12-10 22:38:25.906190] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x18-0x19, len = 2
00:11:25.245  [2024-12-10 22:38:25.906220] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ queue 2 PCI_COMMON_Q_SIZE with 0x100
00:11:25.245  [2024-12-10 22:38:25.906262] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 2, size 256
00:11:25.245  [2024-12-10 22:38:25.907207] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:11:25.245  [2024-12-10 22:38:25.907234] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x2
00:11:25.245  [2024-12-10 22:38:25.908218] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x20-0x23, len = 4
00:11:25.245  [2024-12-10 22:38:25.908245] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_DESCLO with 0x69ae4000
00:11:25.245  [2024-12-10 22:38:25.909222] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x24-0x27, len = 4
00:11:25.245  [2024-12-10 22:38:25.909250] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_DESCHI with 0x2000
00:11:25.245  [2024-12-10 22:38:25.910227] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x28-0x2B, len = 4
00:11:25.245  [2024-12-10 22:38:25.910253] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_AVAILLO with 0x69ae5000
00:11:25.245  [2024-12-10 22:38:25.911252] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x2C-0x2F, len = 4
00:11:25.245  [2024-12-10 22:38:25.911267] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_AVAILHI with 0x2000
00:11:25.245  [2024-12-10 22:38:25.912241] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x30-0x33, len = 4
00:11:25.245  [2024-12-10 22:38:25.912268] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_USEDLO with 0x69ae6000
00:11:25.245  [2024-12-10 22:38:25.913253] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x34-0x37, len = 4
00:11:25.245  [2024-12-10 22:38:25.913281] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_USEDHI with 0x2000
00:11:25.245  [2024-12-10 22:38:25.914261] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x1E-0x1F, len = 2
00:11:25.245  [2024-12-10 22:38:25.914287] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_Q_NOFF with 0x2
00:11:25.245  [2024-12-10 22:38:25.915264] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:11:25.245  [2024-12-10 22:38:25.915291] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:11:25.245  [2024-12-10 22:38:25.915304] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.scsi: enable vq 2
00:11:25.245  [2024-12-10 22:38:25.915313] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.scsi: try to map vq 2
00:11:25.245  [2024-12-10 22:38:25.915329] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.scsi: map vq 2 successfully
00:11:25.245  [2024-12-10 22:38:25.915386] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 2 addresses:
00:11:25.245  [2024-12-10 22:38:25.915427] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 200069ae4000
00:11:25.245  [2024-12-10 22:38:25.915451] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 200069ae5000
00:11:25.245  [2024-12-10 22:38:25.915467] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 200069ae6000
00:11:25.245  [2024-12-10 22:38:25.916271] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:11:25.245  [2024-12-10 22:38:25.916305] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x3
00:11:25.245  [2024-12-10 22:38:25.917280] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x18-0x19, len = 2
00:11:25.245  [2024-12-10 22:38:25.917318] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ queue 3 PCI_COMMON_Q_SIZE with 0x100
00:11:25.245  [2024-12-10 22:38:25.917378] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 3, size 256
00:11:25.245  [2024-12-10 22:38:25.918291] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:11:25.245  [2024-12-10 22:38:25.918320] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x3
00:11:25.245  [2024-12-10 22:38:25.919314] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x20-0x23, len = 4
00:11:25.245  [2024-12-10 22:38:25.919344] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_DESCLO with 0x69ae0000
00:11:25.245  [2024-12-10 22:38:25.920316] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x24-0x27, len = 4
00:11:25.245  [2024-12-10 22:38:25.920346] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_DESCHI with 0x2000
00:11:25.245  [2024-12-10 22:38:25.921318] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x28-0x2B, len = 4
00:11:25.245  [2024-12-10 22:38:25.921348] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_AVAILLO with 0x69ae1000
00:11:25.245  [2024-12-10 22:38:25.922326] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x2C-0x2F, len = 4
00:11:25.245  [2024-12-10 22:38:25.922356] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_AVAILHI with 0x2000
00:11:25.245  [2024-12-10 22:38:25.923338] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x30-0x33, len = 4
00:11:25.245  [2024-12-10 22:38:25.923368] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_USEDLO with 0x69ae2000
00:11:25.246  [2024-12-10 22:38:25.924347] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x34-0x37, len = 4
00:11:25.246  [2024-12-10 22:38:25.924378] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_USEDHI with 0x2000
00:11:25.246  [2024-12-10 22:38:25.925351] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x1E-0x1F, len = 2
00:11:25.246  [2024-12-10 22:38:25.925388] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_Q_NOFF with 0x3
00:11:25.246  [2024-12-10 22:38:25.926368] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:11:25.246  [2024-12-10 22:38:25.926397] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:11:25.246  [2024-12-10 22:38:25.926408] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.scsi: enable vq 3
00:11:25.246  [2024-12-10 22:38:25.926420] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.scsi: try to map vq 3
00:11:25.246  [2024-12-10 22:38:25.926432] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.scsi: map vq 3 successfully
00:11:25.246  [2024-12-10 22:38:25.926479] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 3 addresses:
00:11:25.246  [2024-12-10 22:38:25.926534] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 200069ae0000
00:11:25.246  [2024-12-10 22:38:25.926572] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 200069ae1000
00:11:25.246  [2024-12-10 22:38:25.926593] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 200069ae2000
00:11:25.246  [2024-12-10 22:38:25.927380] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:11:25.246  [2024-12-10 22:38:25.927405] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0xb
00:11:25.246  [2024-12-10 22:38:25.927467] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status b
00:11:25.246  [2024-12-10 22:38:25.927519] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status f
00:11:25.246  [2024-12-10 22:38:25.928392] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:11:25.246  [2024-12-10 22:38:25.928418] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0xf
00:11:25.246  [2024-12-10 22:38:25.928431] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status b, set status f
00:11:25.246  [2024-12-10 22:38:25.928442] vfu_virtio.c:1365:vfu_virtio_dev_start: *DEBUG*: start vfu.scsi
00:11:25.246  [2024-12-10 22:38:25.931211] vfu_virtio.c:1377:vfu_virtio_dev_start: *DEBUG*: vfu.scsi is started with ret 0
00:11:25.246  [2024-12-10 22:38:25.932296] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:11:25.246  [2024-12-10 22:38:25.932329] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0xf
00:11:25.246  [2024-12-10 22:38:25.932381] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status f
00:11:25.246  VirtioScsi0t0 VirtioScsi0t1
00:11:25.246   22:38:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@46 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /tmp/bdevperf.sock bdev_virtio_attach_controller --dev-type blk --trtype vfio-user --traddr /tmp/vfu_devices/vfu.blk VirtioBlk0
00:11:25.505  [2024-12-10 22:38:26.168120] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /tmp/vfu_devices/vfu.blk: attached successfully
00:11:25.505  [2024-12-10 22:38:26.170280] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:11:25.505  [2024-12-10 22:38:26.171282] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:11:25.505  [2024-12-10 22:38:26.172256] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:11:25.505  [2024-12-10 22:38:26.173319] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:11:25.505  [2024-12-10 22:38:26.174280] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x4000, Offset 0x0, Flags 0xf, Cap offset 32
00:11:25.505  [2024-12-10 22:38:26.174308] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x3000, Map addr 0x7f3ca971b000
00:11:25.505  [2024-12-10 22:38:26.175351] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:11:25.505  [2024-12-10 22:38:26.176320] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:11:25.505  [2024-12-10 22:38:26.177373] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:11:25.505  [2024-12-10 22:38:26.178369] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:11:25.505  [2024-12-10 22:38:26.179363] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:11:25.505  [2024-12-10 22:38:26.180969] vfio_user_pci.c:  65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x80000000
00:11:25.505  [2024-12-10 22:38:26.191170] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user1, Path /tmp/vfu_devices/vfu.blk Setup Successfully
00:11:25.505  [2024-12-10 22:38:26.192504] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 0
00:11:25.505  [2024-12-10 22:38:26.193478] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:11:25.505  [2024-12-10 22:38:26.193517] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0x0
00:11:25.505  [2024-12-10 22:38:26.193535] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 0, set status 0
00:11:25.505  [2024-12-10 22:38:26.193548] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.blk resetting
00:11:25.505  [2024-12-10 22:38:26.194476] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:11:25.505  [2024-12-10 22:38:26.194508] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x0
00:11:25.505  [2024-12-10 22:38:26.194545] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:11:25.505  [2024-12-10 22:38:26.195490] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:11:25.505  [2024-12-10 22:38:26.195519] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x0
00:11:25.505  [2024-12-10 22:38:26.195549] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:11:25.505  [2024-12-10 22:38:26.195576] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 1
00:11:25.505  [2024-12-10 22:38:26.196493] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:11:25.505  [2024-12-10 22:38:26.196523] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0x1
00:11:25.505  [2024-12-10 22:38:26.196538] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 0, set status 1
00:11:25.505  [2024-12-10 22:38:26.197497] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:11:25.505  [2024-12-10 22:38:26.197530] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x1
00:11:25.505  [2024-12-10 22:38:26.197587] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 1
00:11:25.505  [2024-12-10 22:38:26.198514] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:11:25.505  [2024-12-10 22:38:26.198544] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x1
00:11:25.505  [2024-12-10 22:38:26.198603] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 1
00:11:25.505  [2024-12-10 22:38:26.198659] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 3
00:11:25.505  [2024-12-10 22:38:26.199522] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:11:25.505  [2024-12-10 22:38:26.199552] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0x3
00:11:25.505  [2024-12-10 22:38:26.199563] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 1, set status 3
00:11:25.505  [2024-12-10 22:38:26.200544] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:11:25.505  [2024-12-10 22:38:26.200573] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x3
00:11:25.505  [2024-12-10 22:38:26.200620] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 3
00:11:25.505  [2024-12-10 22:38:26.201560] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x0-0x3, len = 4
00:11:25.505  [2024-12-10 22:38:26.201588] vfu_virtio.c: 937:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_DFSELECT with 0x0
00:11:25.505  [2024-12-10 22:38:26.202577] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x4-0x7, len = 4
00:11:25.505  [2024-12-10 22:38:26.202604] vfu_virtio.c:1072:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_DF_LO with 0x10007646
00:11:25.505  [2024-12-10 22:38:26.203598] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x0-0x3, len = 4
00:11:25.505  [2024-12-10 22:38:26.203625] vfu_virtio.c: 937:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_DFSELECT with 0x1
00:11:25.505  [2024-12-10 22:38:26.204598] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x4-0x7, len = 4
00:11:25.505  [2024-12-10 22:38:26.204625] vfu_virtio.c:1067:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_DF_HI with 0x5
00:11:25.505  [2024-12-10 22:38:26.204670] virtio_vfio_user.c: 127:virtio_vfio_user_get_features: *DEBUG*: feature_hi 0x5, feature_low 0x10007646
00:11:25.505  [2024-12-10 22:38:26.205606] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x8-0xB, len = 4
00:11:25.505  [2024-12-10 22:38:26.205633] vfu_virtio.c: 943:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_GFSELECT with 0x0
00:11:25.505  [2024-12-10 22:38:26.206617] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0xC-0xF, len = 4
00:11:25.505  [2024-12-10 22:38:26.206645] vfu_virtio.c: 956:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_GF_LO with 0x3446
00:11:25.505  [2024-12-10 22:38:26.206660] vfu_virtio.c: 255:virtio_dev_set_features: *DEBUG*: vfu.blk: negotiated features 0x3446
00:11:25.505  [2024-12-10 22:38:26.207621] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x8-0xB, len = 4
00:11:25.505  [2024-12-10 22:38:26.207652] vfu_virtio.c: 943:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_GFSELECT with 0x1
00:11:25.505  [2024-12-10 22:38:26.208646] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0xC-0xF, len = 4
00:11:25.505  [2024-12-10 22:38:26.208666] vfu_virtio.c: 951:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_GF_HI with 0x1
00:11:25.505  [2024-12-10 22:38:26.208681] vfu_virtio.c: 255:virtio_dev_set_features: *DEBUG*: vfu.blk: negotiated features 0x100003446
00:11:25.505  [2024-12-10 22:38:26.208742] virtio_vfio_user.c: 176:virtio_vfio_user_set_features: *DEBUG*: features 0x100003446
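The negotiated feature mask logged above (GF_HI 0x1 shifted into the upper dword of GF_LO 0x3446, giving 0x100003446) can be decoded bit by bit. A minimal sketch; the bit names below follow the virtio 1.x specification and are assumptions added for illustration, not values taken from the SPDK log:

```python
# Decode the negotiated virtio-blk feature mask from the log above.
# Bit positions/names per the virtio 1.x spec (assumed, not from the log).
FEATURE_BITS = {
    1: "VIRTIO_BLK_F_SIZE_MAX",
    2: "VIRTIO_BLK_F_SEG_MAX",
    6: "VIRTIO_BLK_F_BLK_SIZE",
    10: "VIRTIO_BLK_F_TOPOLOGY",
    12: "VIRTIO_BLK_F_MQ",
    13: "VIRTIO_BLK_F_DISCARD",
    32: "VIRTIO_F_VERSION_1",
}

def decode_features(mask: int) -> list[str]:
    """Return the names of all recognized bits set in the mask, low bit first."""
    return [name for bit, name in sorted(FEATURE_BITS.items()) if mask & (1 << bit)]

# 0x100003446 = (PCI_COMMON_GF_HI 0x1 << 32) | PCI_COMMON_GF_LO 0x3446
print(decode_features(0x100003446))
```

Bit 32 (VIRTIO_F_VERSION_1) being set confirms the device was negotiated as a modern (non-legacy) virtio device.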
00:11:25.505  [2024-12-10 22:38:26.209645] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:11:25.505  [2024-12-10 22:38:26.209673] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x3
00:11:25.505  [2024-12-10 22:38:26.209716] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 3
00:11:25.505  [2024-12-10 22:38:26.209792] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status b
00:11:25.505  [2024-12-10 22:38:26.210656] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:11:25.505  [2024-12-10 22:38:26.210683] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0xb
00:11:25.505  [2024-12-10 22:38:26.210699] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 3, set status b
00:11:25.505  [2024-12-10 22:38:26.211659] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:11:25.505  [2024-12-10 22:38:26.211693] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0xb
00:11:25.505  [2024-12-10 22:38:26.211740] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status b
00:11:25.505  [2024-12-10 22:38:26.211823] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x22, length 0x2
00:11:25.505  [2024-12-10 22:38:26.212667] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2022-0x2023, len = 2
00:11:25.505  [2024-12-10 22:38:26.212741] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x14, length 0x4
00:11:25.505  [2024-12-10 22:38:26.213670] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2014-0x2017, len = 4
00:11:25.505  [2024-12-10 22:38:26.213775] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x0, length 0x8
00:11:25.505  [2024-12-10 22:38:26.214680] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2000-0x2007, len = 8
00:11:25.505  [2024-12-10 22:38:26.214782] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x22, length 0x2
00:11:25.506  [2024-12-10 22:38:26.215698] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2022-0x2023, len = 2
00:11:25.506  [2024-12-10 22:38:26.215806] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x8, length 0x4
00:11:25.506  [2024-12-10 22:38:26.216709] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2008-0x200B, len = 4
00:11:25.506  [2024-12-10 22:38:26.216806] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0xc, length 0x4
00:11:25.506  [2024-12-10 22:38:26.217733] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x200C-0x200F, len = 4
00:11:25.506  [2024-12-10 22:38:26.218740] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:11:25.506  [2024-12-10 22:38:26.218789] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x0
00:11:25.506  [2024-12-10 22:38:26.219756] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x18-0x19, len = 2
00:11:25.506  [2024-12-10 22:38:26.219799] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ queue 0 PCI_COMMON_Q_SIZE with 0x100
00:11:25.506  [2024-12-10 22:38:26.219860] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 0, size 256
00:11:25.506  [2024-12-10 22:38:26.220779] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:11:25.506  [2024-12-10 22:38:26.220801] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x0
00:11:25.506  [2024-12-10 22:38:26.221780] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x20-0x23, len = 4
00:11:25.506  [2024-12-10 22:38:26.221811] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_DESCLO with 0x69adc000
00:11:25.506  [2024-12-10 22:38:26.222802] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x24-0x27, len = 4
00:11:25.506  [2024-12-10 22:38:26.222824] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_DESCHI with 0x2000
00:11:25.506  [2024-12-10 22:38:26.223806] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x28-0x2B, len = 4
00:11:25.506  [2024-12-10 22:38:26.223825] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_AVAILLO with 0x69add000
00:11:25.506  [2024-12-10 22:38:26.224819] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x2C-0x2F, len = 4
00:11:25.506  [2024-12-10 22:38:26.224840] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_AVAILHI with 0x2000
00:11:25.506  [2024-12-10 22:38:26.225831] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x30-0x33, len = 4
00:11:25.506  [2024-12-10 22:38:26.225853] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_USEDLO with 0x69ade000
00:11:25.506  [2024-12-10 22:38:26.226851] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x34-0x37, len = 4
00:11:25.506  [2024-12-10 22:38:26.226870] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_USEDHI with 0x2000
00:11:25.506  [2024-12-10 22:38:26.227865] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x1E-0x1F, len = 2
00:11:25.506  [2024-12-10 22:38:26.227886] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_Q_NOFF with 0x0
00:11:25.506  [2024-12-10 22:38:26.228871] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x1C-0x1D, len = 2
00:11:25.506  [2024-12-10 22:38:26.228893] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:11:25.506  [2024-12-10 22:38:26.228908] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.blk: enable vq 0
00:11:25.506  [2024-12-10 22:38:26.228921] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.blk: try to map vq 0
00:11:25.506  [2024-12-10 22:38:26.228944] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.blk: map vq 0 successfully
00:11:25.506  [2024-12-10 22:38:26.229007] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 0 addresses:
00:11:25.506  [2024-12-10 22:38:26.229058] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 200069adc000
00:11:25.506  [2024-12-10 22:38:26.229080] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 200069add000
00:11:25.506  [2024-12-10 22:38:26.229107] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 200069ade000
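The three ring addresses above (desc 0x...adc000, avail 0x...add000, used 0x...ade000) sit on consecutive 4 KiB pages. A quick check, using the standard virtio split-ring layout sizes, that each component of a 256-entry queue fits in a single page; this is an illustrative sketch, not SPDK code:

```python
# For the 256-entry queue in the log (PCI_COMMON_Q_SIZE read back as 0x100),
# verify each split-vring component fits in one 4 KiB page -- consistent
# with the driver placing desc/avail/used on consecutive pages.
PAGE = 4096
qsize = 256

desc_bytes = 16 * qsize        # per descriptor: addr(8) + len(4) + flags(2) + next(2)
avail_bytes = 6 + 2 * qsize    # flags(2) + idx(2) + ring(2*N), plus used_event(2)
used_bytes = 6 + 8 * qsize     # flags(2) + idx(2) + ring(8*N), plus avail_event(2)

for name, size in (("desc", desc_bytes), ("avail", avail_bytes), ("used", used_bytes)):
    print(f"{name}: {size} bytes, fits in one page: {size <= PAGE}")
```

The descriptor table at exactly 4096 bytes fills its page; the avail and used rings are far smaller, so page-aligned placement wastes little memory while keeping each structure IOMMU-mappable on its own page.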
00:11:25.506  [2024-12-10 22:38:26.229881] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:11:25.506  [2024-12-10 22:38:26.229897] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x1
00:11:25.506  [2024-12-10 22:38:26.230891] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x18-0x19, len = 2
00:11:25.506  [2024-12-10 22:38:26.230907] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ queue 1 PCI_COMMON_Q_SIZE with 0x100
00:11:25.506  [2024-12-10 22:38:26.230954] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 1, size 256
00:11:25.506  [2024-12-10 22:38:26.231899] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:11:25.506  [2024-12-10 22:38:26.231914] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x1
00:11:25.506  [2024-12-10 22:38:26.232911] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x20-0x23, len = 4
00:11:25.506  [2024-12-10 22:38:26.232928] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_DESCLO with 0x69ad8000
00:11:25.506  [2024-12-10 22:38:26.233927] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x24-0x27, len = 4
00:11:25.506  [2024-12-10 22:38:26.233943] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_DESCHI with 0x2000
00:11:25.506  [2024-12-10 22:38:26.234940] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x28-0x2B, len = 4
00:11:25.506  [2024-12-10 22:38:26.234956] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_AVAILLO with 0x69ad9000
00:11:25.506  [2024-12-10 22:38:26.235958] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x2C-0x2F, len = 4
00:11:25.506  [2024-12-10 22:38:26.235973] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_AVAILHI with 0x2000
00:11:25.506  [2024-12-10 22:38:26.236965] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x30-0x33, len = 4
00:11:25.506  [2024-12-10 22:38:26.236982] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_USEDLO with 0x69ada000
00:11:25.506  [2024-12-10 22:38:26.237980] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x34-0x37, len = 4
00:11:25.506  [2024-12-10 22:38:26.237997] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_USEDHI with 0x2000
00:11:25.506  [2024-12-10 22:38:26.238998] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x1E-0x1F, len = 2
00:11:25.506  [2024-12-10 22:38:26.239014] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_Q_NOFF with 0x1
00:11:25.506  [2024-12-10 22:38:26.240001] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x1C-0x1D, len = 2
00:11:25.506  [2024-12-10 22:38:26.240017] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:11:25.506  [2024-12-10 22:38:26.240031] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.blk: enable vq 1
00:11:25.506  [2024-12-10 22:38:26.240041] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.blk: try to map vq 1
00:11:25.506  [2024-12-10 22:38:26.240055] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.blk: map vq 1 successfully
00:11:25.506  [2024-12-10 22:38:26.240121] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 1 addresses:
00:11:25.506  [2024-12-10 22:38:26.240165] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 200069ad8000
00:11:25.506  [2024-12-10 22:38:26.240188] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 200069ad9000
00:11:25.506  [2024-12-10 22:38:26.240211] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 200069ada000
00:11:25.506  [2024-12-10 22:38:26.241022] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:11:25.506  [2024-12-10 22:38:26.241044] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0xb
00:11:25.506  [2024-12-10 22:38:26.241103] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status b
00:11:25.506  [2024-12-10 22:38:26.241152] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status f
00:11:25.506  [2024-12-10 22:38:26.242037] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:11:25.506  [2024-12-10 22:38:26.242057] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0xf
00:11:25.506  [2024-12-10 22:38:26.242069] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status b, set status f
00:11:25.506  [2024-12-10 22:38:26.242081] vfu_virtio.c:1365:vfu_virtio_dev_start: *DEBUG*: start vfu.blk
00:11:25.506  [2024-12-10 22:38:26.244634] vfu_virtio.c:1377:vfu_virtio_dev_start: *DEBUG*: vfu.blk is started with ret 0
00:11:25.506  [2024-12-10 22:38:26.245791] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:11:25.506  [2024-12-10 22:38:26.245811] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0xf
00:11:25.506  [2024-12-10 22:38:26.245872] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status f
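The device status values walked through above (3, then b, then f) trace the standard virtio initialization handshake. A small decoder, with bit names as defined for the virtio 1.x status register:

```python
# Decode the virtio device-status values seen in the log (0x3 -> 0xb -> 0xf).
# Bit names per the virtio 1.x device status register definition.
STATUS_BITS = {
    0x01: "ACKNOWLEDGE",
    0x02: "DRIVER",
    0x04: "DRIVER_OK",
    0x08: "FEATURES_OK",
    0x40: "DEVICE_NEEDS_RESET",
    0x80: "FAILED",
}

def decode_status(status: int) -> list[str]:
    """Return the names of all status bits set, low bit first."""
    return [name for bit, name in sorted(STATUS_BITS.items()) if status & bit]

for s in (0x3, 0xb, 0xf):
    print(hex(s), decode_status(s))
```

So 0x3 is ACKNOWLEDGE|DRIVER, 0xb adds FEATURES_OK after feature negotiation, and 0xf adds DRIVER_OK once the queues are set up, at which point the target logs `vfu_virtio_dev_start`.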
00:11:25.506  VirtioBlk0
00:11:25.506   22:38:26 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@50 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /tmp/bdevperf.sock perform_tests
00:11:25.764  Running I/O for 30 seconds...
00:11:27.635      75111.00 IOPS,   293.40 MiB/s
[2024-12-10T21:38:29.797Z]     74973.50 IOPS,   292.87 MiB/s
[2024-12-10T21:38:30.732Z]     74949.00 IOPS,   292.77 MiB/s
[2024-12-10T21:38:31.671Z]     74904.75 IOPS,   292.60 MiB/s
[2024-12-10T21:38:32.607Z]     74846.40 IOPS,   292.37 MiB/s
[2024-12-10T21:38:33.543Z]     74848.50 IOPS,   292.38 MiB/s
[2024-12-10T21:38:34.479Z]     74855.86 IOPS,   292.41 MiB/s
[2024-12-10T21:38:35.415Z]     74833.38 IOPS,   292.32 MiB/s
[2024-12-10T21:38:36.790Z]     74859.00 IOPS,   292.42 MiB/s
[2024-12-10T21:38:37.725Z]     74863.10 IOPS,   292.43 MiB/s
[2024-12-10T21:38:38.660Z]     74851.91 IOPS,   292.39 MiB/s
[2024-12-10T21:38:39.597Z]     74861.25 IOPS,   292.43 MiB/s
[2024-12-10T21:38:40.533Z]     74873.77 IOPS,   292.48 MiB/s
[2024-12-10T21:38:41.472Z]     74867.93 IOPS,   292.45 MiB/s
[2024-12-10T21:38:42.851Z]     74876.47 IOPS,   292.49 MiB/s
[2024-12-10T21:38:43.419Z]     74872.69 IOPS,   292.47 MiB/s
[2024-12-10T21:38:44.797Z]     74864.82 IOPS,   292.44 MiB/s
[2024-12-10T21:38:45.733Z]     74870.17 IOPS,   292.46 MiB/s
[2024-12-10T21:38:46.668Z]     74874.47 IOPS,   292.48 MiB/s
[2024-12-10T21:38:47.603Z]     74874.95 IOPS,   292.48 MiB/s
[2024-12-10T21:38:48.540Z]     74878.24 IOPS,   292.49 MiB/s
[2024-12-10T21:38:49.476Z]     74870.50 IOPS,   292.46 MiB/s
[2024-12-10T21:38:50.854Z]     74874.22 IOPS,   292.48 MiB/s
[2024-12-10T21:38:51.789Z]     74872.46 IOPS,   292.47 MiB/s
[2024-12-10T21:38:52.724Z]     74865.44 IOPS,   292.44 MiB/s
[2024-12-10T21:38:53.658Z]     74863.04 IOPS,   292.43 MiB/s
[2024-12-10T21:38:54.596Z]     74866.19 IOPS,   292.45 MiB/s
[2024-12-10T21:38:55.535Z]     74863.07 IOPS,   292.43 MiB/s
[2024-12-10T21:38:56.470Z]     74869.34 IOPS,   292.46 MiB/s
[2024-12-10T21:38:56.470Z]     74871.97 IOPS,   292.47 MiB/s
00:11:55.685                                                                                                  Latency(us)
00:11:55.685  
[2024-12-10T21:38:56.470Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:55.685  Job: VirtioScsi0t0 (Core Mask 0x10, workload: randrw, percentage: 50, depth: 256, IO size: 4096)
00:11:55.685  	 VirtioScsi0t0       :      30.01   17441.24      68.13       0.00     0.00   14669.27    2010.76   16443.58
00:11:55.685  Job: VirtioScsi0t1 (Core Mask 0x20, workload: randrw, percentage: 50, depth: 256, IO size: 4096)
00:11:55.685  	 VirtioScsi0t1       :      30.01   17440.75      68.13       0.00     0.00   14669.64    1995.87   16562.73
00:11:55.685  Job: VirtioBlk0 (Core Mask 0x40, workload: randrw, percentage: 50, depth: 256, IO size: 4096)
00:11:55.685  	 VirtioBlk0          :      30.01   39986.74     156.20       0.00     0.00    6396.56    1980.97    8400.52
00:11:55.685  
[2024-12-10T21:38:56.470Z]  ===================================================================================================================
00:11:55.685  
[2024-12-10T21:38:56.470Z]  Total                       :              74868.72     292.46       0.00     0.00   10251.42    1980.97   16562.73
00:11:55.685  {
00:11:55.685    "results": [
00:11:55.685      {
00:11:55.685        "job": "VirtioScsi0t0",
00:11:55.685        "core_mask": "0x10",
00:11:55.685        "workload": "randrw",
00:11:55.685        "percentage": 50,
00:11:55.685        "status": "finished",
00:11:55.685        "queue_depth": 256,
00:11:55.685        "io_size": 4096,
00:11:55.685        "runtime": 30.01278,
00:11:55.685        "iops": 17441.236699832538,
00:11:55.685        "mibps": 68.12983085872085,
00:11:55.685        "io_failed": 0,
00:11:55.685        "io_timeout": 0,
00:11:55.685        "avg_latency_us": 14669.274773239598,
00:11:55.685        "min_latency_us": 2010.7636363636364,
00:11:55.685        "max_latency_us": 16443.578181818182
00:11:55.685      },
00:11:55.685      {
00:11:55.685        "job": "VirtioScsi0t1",
00:11:55.685        "core_mask": "0x20",
00:11:55.685        "workload": "randrw",
00:11:55.685        "percentage": 50,
00:11:55.685        "status": "finished",
00:11:55.685        "queue_depth": 256,
00:11:55.685        "io_size": 4096,
00:11:55.685        "runtime": 30.012594,
00:11:55.685        "iops": 17440.745041898077,
00:11:55.685        "mibps": 68.12791031991436,
00:11:55.685        "io_failed": 0,
00:11:55.685        "io_timeout": 0,
00:11:55.685        "avg_latency_us": 14669.63702713264,
00:11:55.685        "min_latency_us": 1995.8690909090908,
00:11:55.685        "max_latency_us": 16562.734545454547
00:11:55.685      },
00:11:55.685      {
00:11:55.685        "job": "VirtioBlk0",
00:11:55.685        "core_mask": "0x40",
00:11:55.685        "workload": "randrw",
00:11:55.685        "percentage": 50,
00:11:55.685        "status": "finished",
00:11:55.685        "queue_depth": 256,
00:11:55.685        "io_size": 4096,
00:11:55.685        "runtime": 30.006221,
00:11:55.685        "iops": 39986.741416055025,
00:11:55.685        "mibps": 156.19820865646494,
00:11:55.685        "io_failed": 0,
00:11:55.685        "io_timeout": 0,
00:11:55.685        "avg_latency_us": 6396.557914037962,
00:11:55.685        "min_latency_us": 1980.9745454545455,
00:11:55.685        "max_latency_us": 8400.523636363636
00:11:55.685      }
00:11:55.685    ],
00:11:55.685    "core_count": 3
00:11:55.685  }
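The JSON block emitted by bdevperf above is machine-readable and can be post-processed directly. A minimal sketch; the field names (`results`, `job`, `iops`, `core_count`) are taken from that output, and the numbers here are trimmed copies of it:

```python
import json

# Summarize a bdevperf-style JSON results block (field names and values
# copied from the log output above, trimmed to the fields used here).
raw = '''
{
  "results": [
    {"job": "VirtioScsi0t0", "iops": 17441.236699832538},
    {"job": "VirtioScsi0t1", "iops": 17440.745041898077},
    {"job": "VirtioBlk0",    "iops": 39986.741416055025}
  ],
  "core_count": 3
}
'''

data = json.loads(raw)
total_iops = sum(job["iops"] for job in data["results"])
assert len(data["results"]) == data["core_count"]  # one job per reactor core
print(f"total IOPS across jobs: {total_iops:.2f}")
```

The per-job sum reproduces the 74868.72 figure in the "Total" row of the human-readable table, confirming the two views are derived from the same counters.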
00:11:55.944   22:38:56 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@52 -- # killprocess 113275
00:11:55.944   22:38:56 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 113275 ']'
00:11:55.945   22:38:56 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@958 -- # kill -0 113275
00:11:55.945    22:38:56 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@959 -- # uname
00:11:55.945   22:38:56 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:55.945    22:38:56 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113275
00:11:55.945   22:38:56 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:11:55.945   22:38:56 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:11:55.945   22:38:56 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113275'
00:11:55.945  killing process with pid 113275
00:11:55.945   22:38:56 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@973 -- # kill 113275
00:11:55.945  Received shutdown signal, test time was about 30.000000 seconds
00:11:55.945  
00:11:55.945                                                                                                  Latency(us)
00:11:55.945  
[2024-12-10T21:38:56.730Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:55.945  
[2024-12-10T21:38:56.730Z]  ===================================================================================================================
00:11:55.945  
[2024-12-10T21:38:56.730Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:11:55.945  [2024-12-10 22:38:56.515357] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 0
00:11:55.945   22:38:56 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@978 -- # wait 113275
00:11:55.945  [2024-12-10 22:38:56.515872] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:11:55.945  [2024-12-10 22:38:56.515908] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0x0
00:11:55.945  [2024-12-10 22:38:56.515926] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status f, set status 0
00:11:55.945  [2024-12-10 22:38:56.515936] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.blk
00:11:55.945  [2024-12-10 22:38:56.515957] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.blk: unmap vq 0
00:11:55.945  [2024-12-10 22:38:56.515971] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.blk: unmap vq 1
00:11:55.945  [2024-12-10 22:38:56.515987] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.blk resetting
00:11:55.945  [2024-12-10 22:38:56.516861] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:11:55.945  [2024-12-10 22:38:56.516891] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x0
00:11:55.945  [2024-12-10 22:38:56.516917] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:11:55.945  [2024-12-10 22:38:56.517864] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:11:55.945  [2024-12-10 22:38:56.517886] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x0
00:11:55.945  [2024-12-10 22:38:56.518868] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x1C-0x1D, len = 2
00:11:55.945  [2024-12-10 22:38:56.518887] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:11:55.945  [2024-12-10 22:38:56.518900] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.blk: disable vq 0
00:11:55.945  [2024-12-10 22:38:56.518916] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 0 isn't enabled
00:11:55.945  [2024-12-10 22:38:56.519876] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:11:55.945  [2024-12-10 22:38:56.519895] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x1
00:11:55.945  [2024-12-10 22:38:56.520888] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x1C-0x1D, len = 2
00:11:55.945  [2024-12-10 22:38:56.520907] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:11:55.945  [2024-12-10 22:38:56.520917] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.blk: disable vq 1
00:11:55.945  [2024-12-10 22:38:56.520931] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 1 isn't enabled
00:11:55.945  [2024-12-10 22:38:56.520974] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /tmp/vfu_devices/vfu.blk
00:11:55.945  [2024-12-10 22:38:56.523860] vfio_user_pci.c:  96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x80000000
00:11:55.945  [2024-12-10 22:38:56.555624] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 0
00:11:55.945  [2024-12-10 22:38:56.556281] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.blk
00:11:55.945  [2024-12-10 22:38:56.556309] vfu_virtio.c:1391:vfu_virtio_dev_stop: *DEBUG*: vfu.blk isn't started
00:11:55.945  [2024-12-10 22:38:56.556320] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.blk resetting
00:11:55.945  [2024-12-10 22:38:56.556344] vfu_virtio.c:1416:vfu_virtio_detach_device: *DEBUG*: detach device vfu.blk
00:11:55.945  [2024-12-10 22:38:56.556356] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.blk
00:11:55.945  [2024-12-10 22:38:56.556369] vfu_virtio.c:1391:vfu_virtio_dev_stop: *DEBUG*: vfu.blk isn't started
00:11:55.945  [2024-12-10 22:38:56.556437] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:11:55.945  [2024-12-10 22:38:56.556479] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0x0
00:11:55.945  [2024-12-10 22:38:56.556494] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status f, set status 0
00:11:55.945  [2024-12-10 22:38:56.556507] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.scsi
00:11:55.945  [2024-12-10 22:38:56.556529] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.scsi: unmap vq 0
00:11:55.945  [2024-12-10 22:38:56.556546] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.scsi: unmap vq 1
00:11:55.945  [2024-12-10 22:38:56.556556] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.scsi: unmap vq 2
00:11:55.945  [2024-12-10 22:38:56.556569] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.scsi: unmap vq 3
00:11:55.945  [2024-12-10 22:38:56.556578] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.scsi resetting
00:11:55.945  [2024-12-10 22:38:56.557436] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:11:55.945  [2024-12-10 22:38:56.557472] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x0
00:11:55.945  [2024-12-10 22:38:56.557504] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:11:55.945  [2024-12-10 22:38:56.558441] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:11:55.945  [2024-12-10 22:38:56.558471] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x0
00:11:55.945  [2024-12-10 22:38:56.559452] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:11:55.945  [2024-12-10 22:38:56.559480] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:11:55.945  [2024-12-10 22:38:56.559499] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.scsi: disable vq 0
00:11:55.945  [2024-12-10 22:38:56.559509] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 0 isn't enabled
00:11:55.945  [2024-12-10 22:38:56.560459] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:11:55.945  [2024-12-10 22:38:56.560488] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x1
00:11:55.945  [2024-12-10 22:38:56.561459] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:11:55.945  [2024-12-10 22:38:56.561488] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:11:55.945  [2024-12-10 22:38:56.561501] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.scsi: disable vq 1
00:11:55.945  [2024-12-10 22:38:56.561511] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 1 isn't enabled
00:11:55.945  [2024-12-10 22:38:56.562467] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:11:55.945  [2024-12-10 22:38:56.562495] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x2
00:11:55.945  [2024-12-10 22:38:56.563469] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:11:55.945  [2024-12-10 22:38:56.563496] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:11:55.945  [2024-12-10 22:38:56.563509] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.scsi: disable vq 2
00:11:55.945  [2024-12-10 22:38:56.563518] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 2 isn't enabled
00:11:55.945  [2024-12-10 22:38:56.564478] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:11:55.945  [2024-12-10 22:38:56.564507] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x3
00:11:55.945  [2024-12-10 22:38:56.565490] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:11:55.945  [2024-12-10 22:38:56.565519] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:11:55.945  [2024-12-10 22:38:56.565534] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.scsi: disable vq 3
00:11:55.945  [2024-12-10 22:38:56.565543] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 3 isn't enabled
00:11:55.945  [2024-12-10 22:38:56.565615] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /tmp/vfu_devices/vfu.scsi
00:11:55.945  [2024-12-10 22:38:56.568436] vfio_user_pci.c:  96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x80000000
00:11:55.945  [2024-12-10 22:38:56.599837] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.scsi
00:11:55.945  [2024-12-10 22:38:56.599859] vfu_virtio.c:1391:vfu_virtio_dev_stop: *DEBUG*: vfu.scsi isn't started
00:11:55.945  [2024-12-10 22:38:56.599872] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.scsi resetting
00:11:55.945  [2024-12-10 22:38:56.599895] vfu_virtio.c:1416:vfu_virtio_detach_device: *DEBUG*: detach device vfu.scsi
00:11:55.945  [2024-12-10 22:38:56.599910] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.scsi
00:11:55.945  [2024-12-10 22:38:56.599919] vfu_virtio.c:1391:vfu_virtio_dev_stop: *DEBUG*: vfu.scsi isn't started
00:12:00.137   22:39:00 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@53 -- # trap - SIGINT SIGTERM EXIT
00:12:00.137   22:39:00 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_delete_endpoint vfu.blk
00:12:00.137  [2024-12-10 22:39:00.431542] tgt_endpoint.c: 701:spdk_vfu_delete_endpoint: *NOTICE*: Destruct endpoint vfu.blk
00:12:00.137   22:39:00 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@57 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_delete_endpoint vfu.scsi
00:12:00.137  [2024-12-10 22:39:00.664490] tgt_endpoint.c: 701:spdk_vfu_delete_endpoint: *NOTICE*: Destruct endpoint vfu.scsi
00:12:00.137   22:39:00 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@59 -- # killprocess 112651
00:12:00.137   22:39:00 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 112651 ']'
00:12:00.137   22:39:00 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@958 -- # kill -0 112651
00:12:00.137    22:39:00 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@959 -- # uname
00:12:00.137   22:39:00 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:00.137    22:39:00 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112651
00:12:00.137   22:39:00 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:00.137   22:39:00 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:00.137   22:39:00 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112651'
00:12:00.137  killing process with pid 112651
00:12:00.137   22:39:00 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@973 -- # kill 112651
00:12:00.137   22:39:00 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@978 -- # wait 112651
00:12:03.425  
00:12:03.425  real	0m43.595s
00:12:03.425  user	5m3.745s
00:12:03.425  sys	0m2.405s
00:12:03.425   22:39:04 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:03.425   22:39:04 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:12:03.425  ************************************
00:12:03.425  END TEST vfio_user_virtio_bdevperf
00:12:03.425  ************************************
00:12:03.425   22:39:04 vfio_user_qemu -- vfio_user/vfio_user.sh@20 -- # [[ y == y ]]
00:12:03.425   22:39:04 vfio_user_qemu -- vfio_user/vfio_user.sh@21 -- # run_test vfio_user_virtio_fs_fio /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_fs.sh
00:12:03.425   22:39:04 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:03.425   22:39:04 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:03.425   22:39:04 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:12:03.425  ************************************
00:12:03.425  START TEST vfio_user_virtio_fs_fio
00:12:03.425  ************************************
00:12:03.425   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_fs.sh
00:12:03.425  * Looking for test storage...
00:12:03.425  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:12:03.425    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:12:03.425     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1711 -- # lcov --version
00:12:03.425     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:12:03.425    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:12:03.425    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:03.425    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:03.425    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:03.425    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@336 -- # IFS=.-:
00:12:03.425    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@336 -- # read -ra ver1
00:12:03.425    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@337 -- # IFS=.-:
00:12:03.425    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@337 -- # read -ra ver2
00:12:03.425    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@338 -- # local 'op=<'
00:12:03.425    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@340 -- # ver1_l=2
00:12:03.425    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@341 -- # ver2_l=1
00:12:03.425    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:03.425    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@344 -- # case "$op" in
00:12:03.425    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@345 -- # : 1
00:12:03.425    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:03.425    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:03.425     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@365 -- # decimal 1
00:12:03.425     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@353 -- # local d=1
00:12:03.425     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:03.425     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@355 -- # echo 1
00:12:03.425    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@365 -- # ver1[v]=1
00:12:03.425     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@366 -- # decimal 2
00:12:03.426     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@353 -- # local d=2
00:12:03.426     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:03.426     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@355 -- # echo 2
00:12:03.426    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@366 -- # ver2[v]=2
00:12:03.426    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:03.426    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:03.426    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@368 -- # return 0
00:12:03.426    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:03.426    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:12:03.426  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:03.426  		--rc genhtml_branch_coverage=1
00:12:03.426  		--rc genhtml_function_coverage=1
00:12:03.426  		--rc genhtml_legend=1
00:12:03.426  		--rc geninfo_all_blocks=1
00:12:03.426  		--rc geninfo_unexecuted_blocks=1
00:12:03.426  		
00:12:03.426  		'
00:12:03.426    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:12:03.426  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:03.426  		--rc genhtml_branch_coverage=1
00:12:03.426  		--rc genhtml_function_coverage=1
00:12:03.426  		--rc genhtml_legend=1
00:12:03.426  		--rc geninfo_all_blocks=1
00:12:03.426  		--rc geninfo_unexecuted_blocks=1
00:12:03.426  		
00:12:03.426  		'
00:12:03.426    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:03.426  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:03.426  		--rc genhtml_branch_coverage=1
00:12:03.426  		--rc genhtml_function_coverage=1
00:12:03.426  		--rc genhtml_legend=1
00:12:03.426  		--rc geninfo_all_blocks=1
00:12:03.426  		--rc geninfo_unexecuted_blocks=1
00:12:03.426  		
00:12:03.426  		'
00:12:03.426    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:12:03.426  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:03.426  		--rc genhtml_branch_coverage=1
00:12:03.426  		--rc genhtml_function_coverage=1
00:12:03.426  		--rc genhtml_legend=1
00:12:03.426  		--rc geninfo_all_blocks=1
00:12:03.426  		--rc geninfo_unexecuted_blocks=1
00:12:03.426  		
00:12:03.426  		'
00:12:03.426   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:12:03.426    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@6 -- # : 128
00:12:03.426    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@7 -- # : 512
00:12:03.426    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:12:03.426     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@6 -- # : false
00:12:03.426     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@7 -- # : /root/vhost_test
00:12:03.426     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:12:03.426     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@9 -- # : qemu-img
00:12:03.426      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:12:03.426     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:12:03.426     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:12:03.426     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:12:03.426     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:12:03.426     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:12:03.426     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:12:03.426       22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_fs.sh
00:12:03.426      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:12:03.426     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:12:03.426     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:12:03.426     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:12:03.685     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:12:03.685     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:12:03.685     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:12:03.685      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:12:03.685      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@2 -- # vhost_0_main_core=0
00:12:03.685      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:12:03.685      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:12:03.685      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:12:03.685      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:12:03.685      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:12:03.685      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:12:03.685      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:12:03.685      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:12:03.685      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:12:03.685      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:12:03.685      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:12:03.685      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:12:03.685      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:12:03.686      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:12:03.686      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:12:03.686      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:12:03.686      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:12:03.686      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:12:03.686      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:12:03.686      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:12:03.686      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:12:03.686      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:12:03.686      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:12:03.686      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:12:03.686     22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:12:03.686      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:12:03.686      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:12:03.686      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:12:03.686      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:12:03.686      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:12:03.686      22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:12:03.686       22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:12:03.686        22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@244 -- # check_cgroup
00:12:03.686        22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:12:03.686        22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:12:03.686        22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@10 -- # echo 2
00:12:03.686       22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:12:03.686    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:12:03.686    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:12:03.686    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/common.sh
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@12 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config
00:12:03.686    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]'
00:12:03.686    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0
00:12:03.686    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5
00:12:03.686    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:12:03.686    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7
00:12:03.686    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:12:03.686    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9
00:12:03.686    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:12:03.686    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@14 -- # get_vhost_dir 0
00:12:03.686    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:12:03.686    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:12:03.686    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@14 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@16 -- # vhosttestinit
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@18 -- # trap 'error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@20 -- # vfu_tgt_run 0
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@6 -- # local vhost_name=0
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@7 -- # local vfio_user_dir vfu_pid_file rpc_py
00:12:03.686    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@9 -- # get_vhost_dir 0
00:12:03.686    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:12:03.686    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:12:03.686    22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@9 -- # vfio_user_dir=/root/vhost_test/vhost/0
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@10 -- # vfu_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@11 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@13 -- # mkdir -p /root/vhost_test/vhost/0
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@15 -- # timing_enter vfu_tgt_start
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@17 -- # vfupid=120418
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@18 -- # echo 120418
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@20 -- # echo 'Process pid: 120418'
00:12:03.686  Process pid: 120418
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@21 -- # echo 'waiting for app to run...'
00:12:03.686  waiting for app to run...
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@22 -- # waitforlisten 120418 /root/vhost_test/vhost/0/rpc.sock
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@835 -- # '[' -z 120418 ']'
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:12:03.686  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:03.686   22:39:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:12:03.686  [2024-12-10 22:39:04.342215] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:12:03.686  [2024-12-10 22:39:04.342329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120418 ]
00:12:03.686  EAL: No free 2048 kB hugepages reported on node 1
00:12:03.945  [2024-12-10 22:39:04.601132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:04.203  [2024-12-10 22:39:04.737217] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:12:04.203  [2024-12-10 22:39:04.737265] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:12:04.203  [2024-12-10 22:39:04.737329] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:12:04.203  [2024-12-10 22:39:04.737337] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:12:05.140   22:39:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:05.140   22:39:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@868 -- # return 0
00:12:05.140   22:39:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@24 -- # timing_exit vfu_tgt_start
00:12:05.140   22:39:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:05.140   22:39:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:12:05.140   22:39:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@22 -- # vfu_vm_dir=/root/vhost_test/vms/vfu_tgt
00:12:05.140   22:39:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@23 -- # rm -rf /root/vhost_test/vms/vfu_tgt
00:12:05.140   22:39:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@24 -- # mkdir -p /root/vhost_test/vms/vfu_tgt
00:12:05.140   22:39:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@27 -- # disk_no=1
00:12:05.141   22:39:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@28 -- # vm_num=1
00:12:05.141   22:39:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@29 -- # job_file=default_fsdev.job
00:12:05.141   22:39:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@30 -- # be_virtiofs_dir=/tmp/vfio-test.1
00:12:05.141   22:39:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@31 -- # vm_virtiofs_dir=/tmp/virtiofs.1
00:12:05.141   22:39:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@33 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_tgt_set_base_path /root/vhost_test/vms/vfu_tgt
00:12:05.141   22:39:05 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@35 -- # rm -rf /tmp/vfio-test.1
00:12:05.399   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@36 -- # mkdir -p /tmp/vfio-test.1
00:12:05.399    22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@39 -- # mktemp --tmpdir=/tmp/vfio-test.1
00:12:05.399   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@39 -- # tmpfile=/tmp/vfio-test.1/tmp.AZxceJDlPw
00:12:05.399   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@41 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock fsdev_aio_create aio.1 /tmp/vfio-test.1
00:12:05.658  aio.1
00:12:05.658   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_virtio_create_fs_endpoint virtio.1 --fsdev-name aio.1 --tag vfu_test.1 --num-queues=2 --qsize=512 --packed-ring
00:12:05.916   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@45 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:12:05.916   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@518 -- # xtrace_disable
00:12:05.916   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:12:05.916  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:12:05.916  INFO: Creating new VM in /root/vhost_test/vms/1
00:12:05.916  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:12:05.916  INFO: TASK MASK: 6-7
00:12:05.916   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@671 -- # local node_num=0
00:12:05.916   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@672 -- # local boot_disk_present=false
00:12:05.916   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:12:05.916   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:12:05.916   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:12:05.916   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:12:05.916   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:12:05.916   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:05.916   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:12:05.916   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:12:05.916  INFO: NUMA NODE: 0
00:12:05.916   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:12:05.916   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:12:05.916   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:12:05.916   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@677 -- # [[ -n '' ]]
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@686 -- # [[ -z '' ]]
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@701 -- # IFS=,
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@701 -- # read -r disk disk_type _
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@702 -- # [[ -z '' ]]
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@702 -- # disk_type=vfio_user_virtio
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@704 -- # case $disk_type in
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@766 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:12:05.917  INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@767 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@768 -- # [[ 1 == '' ]]
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@780 -- # [[ -n '' ]]
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@785 -- # (( 0 ))
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:12:05.917  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@787 -- # cat
00:12:05.917    22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1
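The `run.sh` saved above boils down to a single QEMU invocation. As a minimal sketch of how `vhost/common.sh` assembles it (values are the ones visible in this log; the qemu binary path is environment-specific, and nothing is actually launched here):

```shell
#!/usr/bin/env bash
# Sketch of the QEMU command-line assembly seen above. Only builds and
# prints the argument vector; it does not start a VM.
qemu=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
vm_dir=/root/vhost_test/vms/1
cmd=("$qemu" -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize)
# Hugepage-backed guest memory pinned to NUMA node 0 and shared, as the
# vfio-user transport needs the guest memory mappable by the target.
cmd+=(-object "memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind")
cmd+=(-numa node,memdev=mem -snapshot)
cmd+=(-pidfile "$vm_dir/qemu.pid" -serial "file:$vm_dir/serial.log" -D "$vm_dir/qemu.log")
# User-mode networking: host 10100 -> guest 22 (ssh), host 10101 -> guest 8765 (fio server)
cmd+=(-net "user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765" -net nic)
cmd+=(-drive "file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk")
cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
# The virtio-fs device is attached over the vfio-user transport socket.
cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1")
printf '%s\n' "${cmd[@]}"
```

The `-snapshot` flag means all writes to the OS disk are discarded when the VM exits, so the shared `spdk_test_image.qcow2` stays pristine between runs.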
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@827 -- # echo 10100
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@828 -- # echo 10101
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@829 -- # echo 10102
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@832 -- # [[ -z '' ]]
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@834 -- # echo 10104
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@835 -- # echo 101
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@837 -- # [[ -z '' ]]
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@838 -- # [[ -z '' ]]
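A plausible reading of the `10100`/`10101`/`10102` values echoed above is a per-VM port block: a base of 10000 plus 100 per VM number, with fixed offsets for each service. This derivation is an assumption inferred from the log, not confirmed from the script source:

```shell
# Assumed port scheme for VM number 1, matching the echoed values above.
vm_num=1
ssh_port=$((10000 + vm_num * 100))   # 10100, forwarded to guest port 22
fio_port=$((ssh_port + 1))           # 10101, forwarded to guest port 8765
monitor_port=$((ssh_port + 2))       # 10102, QEMU telnet monitor
echo "$ssh_port $fio_port $monitor_port"
```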
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@46 -- # vm_run 1
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@843 -- # local run_all=false
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@844 -- # local vms_to_run=
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@846 -- # getopts a-: optchar
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@856 -- # false
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@859 -- # shift 0
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@860 -- # for vm in "$@"
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@871 -- # vm_is_running 1
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@373 -- # return 1
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:12:05.917  INFO: running /root/vhost_test/vms/1/run.sh
00:12:05.917   22:39:06 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:12:05.917  Running VM in /root/vhost_test/vms/1
00:12:06.176  [2024-12-10 22:39:06.864562] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully
00:12:06.176  Waiting for QEMU pid file
00:12:07.554  === qemu.log ===
00:12:07.554  === qemu.log ===
00:12:07.554   22:39:07 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@47 -- # vm_wait_for_boot 60 1
00:12:07.554   22:39:07 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@913 -- # assert_number 60
00:12:07.554   22:39:07 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:12:07.554   22:39:07 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@281 -- # return 0
00:12:07.554   22:39:07 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@915 -- # xtrace_disable
00:12:07.554   22:39:07 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:12:07.554  INFO: Waiting for VMs to boot
00:12:07.554  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:12:29.491  
00:12:29.491  INFO: VM1 ready
00:12:29.491  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:29.749  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:30.686  INFO: all VMs ready
00:12:30.686   22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@973 -- # return 0
00:12:30.686   22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@49 -- # vm_exec 1 'mkdir /tmp/virtiofs.1'
00:12:30.686   22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:12:30.686   22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:30.686   22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:30.686   22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:12:30.686   22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:12:30.686    22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:12:30.686    22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:12:30.686    22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:30.686    22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:30.686    22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:12:30.686    22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:12:30.686   22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'mkdir /tmp/virtiofs.1'
00:12:30.686  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:30.945   22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@50 -- # vm_exec 1 'mount -t virtiofs vfu_test.1 /tmp/virtiofs.1'
00:12:30.945   22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:12:30.945   22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:30.945   22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:30.945   22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:12:30.945   22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:12:30.945    22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:12:30.945    22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:12:30.945    22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:30.945    22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:30.945    22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:12:30.945    22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:12:30.945   22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'mount -t virtiofs vfu_test.1 /tmp/virtiofs.1'
00:12:30.945  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
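The guest-side mount above attaches the exported filesystem by its virtio-fs tag. `vfu_test.1` is the tag published by the SPDK target for this VM; since the real mount requires root inside the guest, this sketch only constructs and prints the command:

```shell
# Sketch of the guest-side virtiofs mount performed over ssh above.
# The tag names the fsdev exported by the target; the kernel's virtiofs
# driver resolves it to the attached vfio-user-pci device.
tag=vfu_test.1
mountpoint=/tmp/virtiofs.1
mount_cmd="mount -t virtiofs $tag $mountpoint"
echo "$mount_cmd"
```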
00:12:31.204    22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@52 -- # basename /tmp/vfio-test.1/tmp.AZxceJDlPw
00:12:31.204   22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@52 -- # vm_exec 1 'ls /tmp/virtiofs.1/tmp.AZxceJDlPw'
00:12:31.204   22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:12:31.204   22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:31.204   22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:31.204   22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:12:31.204   22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:12:31.204    22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:12:31.204    22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:12:31.204    22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:31.204    22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:31.204    22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:12:31.204    22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:12:31.204   22:39:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'ls /tmp/virtiofs.1/tmp.AZxceJDlPw'
00:12:31.204  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:31.463  /tmp/virtiofs.1/tmp.AZxceJDlPw
00:12:31.463   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@53 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 1
00:12:31.463   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@977 -- # local OPTIND optchar
00:12:31.463   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@978 -- # local readonly=
00:12:31.463   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@979 -- # local fio_bin=
00:12:31.463   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:12:31.463   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@981 -- # case "$optchar" in
00:12:31.463   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@983 -- # case "$OPTARG" in
00:12:31.463   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:12:31.463   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:12:31.463   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@993 -- # shift 1
00:12:31.463   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@994 -- # for vm_num in "$@"
00:12:31.463   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@995 -- # notice 'Starting fio server on VM1'
00:12:31.463   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM1'
00:12:31.463   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:12:31.463   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:12:31.463   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:12:31.463   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:31.463   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:12:31.463   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM1'
00:12:31.463  INFO: Starting fio server on VM1
00:12:31.463   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:12:31.463   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@997 -- # vm_exec 1 'cat > /root/fio; chmod +x /root/fio'
00:12:31.463   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:12:31.463   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:31.463   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:31.463   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:12:31.463   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:12:31.463    22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:12:31.463    22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:12:31.463    22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:31.463    22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:31.463    22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:12:31.463    22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:12:31.464   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:12:31.464  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:31.723   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@998 -- # vm_exec 1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:12:31.723   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:12:31.723   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:31.723   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:31.723   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:12:31.723   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:12:31.723    22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:12:31.723    22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:12:31.723    22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:31.723    22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:31.723    22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:12:31.723    22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:12:31.723   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:12:31.723  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:31.982   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@54 -- # run_fio --fio-bin=/usr/src/fio-static/fio --job-file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job --out=/root/vhost_test/fio_results --vm=1:/tmp/virtiofs.1/test
00:12:31.982   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1053 -- # local arg
00:12:31.982   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1054 -- # local job_file=
00:12:31.982   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1055 -- # local fio_bin=
00:12:31.982   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1056 -- # vms=()
00:12:31.982   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1056 -- # local vms
00:12:31.982   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1057 -- # local out=
00:12:31.982   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1058 -- # local vm
00:12:31.982   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1059 -- # local run_server_mode=true
00:12:31.982   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1060 -- # local run_plugin_mode=false
00:12:31.982   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1061 -- # local fio_start_cmd
00:12:31.982   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1062 -- # local fio_output_format=normal
00:12:31.982   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1063 -- # local fio_gtod_reduce=false
00:12:31.982   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1064 -- # local wait_for_fio=true
00:12:31.982   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:12:31.982   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:12:31.982   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1069 -- # local fio_bin=/usr/src/fio-static/fio
00:12:31.982   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:12:31.982   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:12:31.982   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1068 -- # local job_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job
00:12:31.982   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:12:31.982   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:12:31.982   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1072 -- # local out=/root/vhost_test/fio_results
00:12:31.982   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1073 -- # mkdir -p /root/vhost_test/fio_results
00:12:31.982   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:12:31.982   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:12:31.983   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:12:31.983   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1092 -- # [[ -n /usr/src/fio-static/fio ]]
00:12:31.983   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1092 -- # [[ ! -r /usr/src/fio-static/fio ]]
00:12:31.983   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1097 -- # [[ -z /usr/src/fio-static/fio ]]
00:12:31.983   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1101 -- # [[ ! -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job ]]
00:12:31.983   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1106 -- # fio_start_cmd='/usr/src/fio-static/fio --eta=never '
00:12:31.983   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1108 -- # local job_fname
00:12:31.983    22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1109 -- # basename /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job
00:12:31.983   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1109 -- # job_fname=default_fsdev.job
00:12:31.983   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1110 -- # log_fname=default_fsdev.log
00:12:31.983   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1111 -- # fio_start_cmd+=' --output=/root/vhost_test/fio_results/default_fsdev.log --output-format=normal '
00:12:31.983   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:12:31.983   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1115 -- # local vm_num=1
00:12:31.983   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1116 -- # local vmdisks=/tmp/virtiofs.1/test
00:12:31.983   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/tmp/virtiofs.1/test@;s@description=\(.*\)@description=\1 (VM=1)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job
00:12:31.983   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1119 -- # vm_exec 1 'cat > /root/default_fsdev.job'
00:12:31.983   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:12:31.983   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:31.983   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:31.983   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:12:31.983   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:12:31.983    22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:12:31.983    22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:12:31.983    22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:31.983    22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:31.983    22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:12:31.983    22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:12:31.983   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/default_fsdev.job'
00:12:31.983  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
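The `sed` invocation above rewrites the shared job template before shipping it to the guest: the placeholder `filename=` line is pointed at the per-VM path, and the job description is tagged with the VM number. The same transform can be reproduced on a throwaway file:

```shell
# Reproduction of run_fio's job-file rewrite on a temporary file, using a
# hypothetical cut-down template (the real default_fsdev.job has more keys).
job=$(mktemp)
cat > "$job" <<'EOF'
[global]
blocksize=4k
iodepth=512
filename=
[job0]
EOF
# Point the empty filename= at the virtiofs mount, as seen in the log.
rewritten=$(sed 's@filename=@filename=/tmp/virtiofs.1/test@' "$job")
echo "$rewritten" | grep 'filename='
rm -f "$job"
```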
00:12:32.241   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1121 -- # false
00:12:32.241   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1125 -- # vm_exec 1 cat /root/default_fsdev.job
00:12:32.241   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:12:32.241   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:32.241   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:32.241   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:12:32.241   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:12:32.241    22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:12:32.241    22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:12:32.242    22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:32.242    22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:32.242    22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:12:32.242    22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:12:32.242   22:39:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 cat /root/default_fsdev.job
00:12:32.242  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:32.500  [global]
00:12:32.500  blocksize=4k
00:12:32.500  iodepth=512
00:12:32.500  ioengine=libaio
00:12:32.500  size=1G
00:12:32.500  group_reporting
00:12:32.500  thread
00:12:32.500  numjobs=1
00:12:32.500  direct=1
00:12:32.500  invalidate=1
00:12:32.500  rw=randrw
00:12:32.500  do_verify=1
00:12:32.500  filename=/tmp/virtiofs.1/test
00:12:32.500  [job0]
00:12:32.500   22:39:33 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1127 -- # true
00:12:32.500    22:39:33 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1128 -- # vm_fio_socket 1
00:12:32.500    22:39:33 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@326 -- # vm_num_is_valid 1
00:12:32.500    22:39:33 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:32.500    22:39:33 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:32.500    22:39:33 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/1
00:12:32.500    22:39:33 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/1/fio_socket
00:12:32.500   22:39:33 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10101 --remote-config /root/default_fsdev.job '
00:12:32.500   22:39:33 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1131 -- # true
00:12:32.500   22:39:33 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1147 -- # true
00:12:32.500   22:39:33 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1161 -- # /usr/src/fio-static/fio --eta=never --output=/root/vhost_test/fio_results/default_fsdev.log --output-format=normal --client=127.0.0.1,10101 --remote-config /root/default_fsdev.job
00:12:59.050   22:39:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1162 -- # sleep 1
00:12:59.050   22:39:57 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1164 -- # [[ normal == \j\s\o\n ]]
00:12:59.050   22:39:57 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1172 -- # [[ ! -n '' ]]
00:12:59.050   22:39:57 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1173 -- # cat /root/vhost_test/fio_results/default_fsdev.log
00:12:59.050  hostname=vhostfedora-cloud-23052, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:12:59.050  <vhostfedora-cloud-23052> job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=512
00:12:59.050  <vhostfedora-cloud-23052> Starting 1 thread
00:12:59.050  <vhostfedora-cloud-23052> job0: Laying out IO file (1 file / 1024MiB)
00:12:59.050  <vhostfedora-cloud-23052> 
00:12:59.050  job0: (groupid=0, jobs=1): err= 0: pid=968: Tue Dec 10 22:39:56 2024
00:12:59.050    read: IOPS=25.6k, BW=100MiB/s (105MB/s)(512MiB/5116msec)
00:12:59.050      slat (nsec): min=1469, max=543854, avg=4786.54, stdev=5476.27
00:12:59.050      clat (usec): min=1691, max=19190, avg=10040.55, stdev=402.04
00:12:59.050       lat (usec): min=1694, max=19194, avg=10045.34, stdev=402.06
00:12:59.050      clat percentiles (usec):
00:12:59.050       |  1.00th=[ 9372],  5.00th=[ 9765], 10.00th=[ 9896], 20.00th=[ 9896],
00:12:59.050       | 30.00th=[10028], 40.00th=[10028], 50.00th=[10028], 60.00th=[10028],
00:12:59.050       | 70.00th=[10159], 80.00th=[10159], 90.00th=[10159], 95.00th=[10290],
00:12:59.050       | 99.00th=[10552], 99.50th=[11207], 99.90th=[14353], 99.95th=[17171],
00:12:59.050       | 99.99th=[19006]
00:12:59.050     bw (  KiB/s): min=101224, max=103888, per=100.00%, avg=102484.80, stdev=815.37, samples=10
00:12:59.050     iops        : min=25306, max=25972, avg=25621.20, stdev=203.84, samples=10
00:12:59.050    write: IOPS=25.6k, BW=100MiB/s (105MB/s)(512MiB/5116msec); 0 zone resets
00:12:59.050      slat (nsec): min=1693, max=1524.4k, avg=5473.54, stdev=7152.01
00:12:59.050      clat (usec): min=1647, max=19196, avg=9925.45, stdev=391.50
00:12:59.050       lat (usec): min=1649, max=19200, avg=9930.92, stdev=391.54
00:12:59.050      clat percentiles (usec):
00:12:59.050       |  1.00th=[ 9241],  5.00th=[ 9765], 10.00th=[ 9765], 20.00th=[ 9765],
00:12:59.050       | 30.00th=[ 9896], 40.00th=[ 9896], 50.00th=[ 9896], 60.00th=[ 9896],
00:12:59.050       | 70.00th=[10028], 80.00th=[10028], 90.00th=[10159], 95.00th=[10159],
00:12:59.050       | 99.00th=[10421], 99.50th=[10683], 99.90th=[14222], 99.95th=[16581],
00:12:59.050       | 99.99th=[19006]
00:12:59.050     bw (  KiB/s): min=101296, max=103456, per=100.00%, avg=102532.00, stdev=819.50, samples=10
00:12:59.050     iops        : min=25324, max=25864, avg=25633.00, stdev=204.87, samples=10
00:12:59.050    lat (msec)   : 2=0.02%, 4=0.05%, 10=55.84%, 20=44.09%
00:12:59.050    cpu          : usr=12.10%, sys=27.25%, ctx=9875, majf=0, minf=7
00:12:59.050    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:12:59.050       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:59.050       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:12:59.050       issued rwts: total=131040,131104,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:59.050       latency   : target=0, window=0, percentile=100.00%, depth=512
00:12:59.050  
00:12:59.050  Run status group 0 (all jobs):
00:12:59.050     READ: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=512MiB (537MB), run=5116-5116msec
00:12:59.050    WRITE: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=512MiB (537MB), run=5116-5116msec
00:12:59.050   22:39:57 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@55 -- # vm_exec 1 'umount /tmp/virtiofs.1'
00:12:59.050   22:39:57 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:12:59.050   22:39:57 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:59.050   22:39:57 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:59.050   22:39:57 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:12:59.050   22:39:57 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:12:59.050    22:39:57 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:12:59.050    22:39:57 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:12:59.050    22:39:57 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:59.050    22:39:57 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:59.050    22:39:57 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:12:59.050    22:39:57 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:12:59.050   22:39:57 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'umount /tmp/virtiofs.1'
00:12:59.050  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@58 -- # notice 'Shutting down virtual machine...'
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...'
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...'
00:12:59.050  INFO: Shutting down virtual machine...
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@59 -- # vm_shutdown_all
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:12:59.050    22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@489 -- # vm_list_all
00:12:59.050    22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@466 -- # vms=()
00:12:59.050    22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@466 -- # local vms
00:12:59.050    22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:12:59.050    22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:12:59.050    22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@492 -- # vm_shutdown 1
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@424 -- # vm_is_running 1
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@376 -- # local vm_pid
00:12:59.050    22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@377 -- # vm_pid=121257
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@379 -- # /bin/kill -0 121257
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@380 -- # return 0
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:12:59.050  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@432 -- # set +e
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:12:59.050    22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:12:59.050    22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:12:59.050    22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:59.050    22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:59.050    22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:12:59.050    22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:12:59.050  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:12:59.050  INFO: VM1 is shutting down - wait a while to complete
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@435 -- # set -e
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:12:59.050  INFO: Waiting for VMs to shutdown...
00:12:59.050   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:12:59.051   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:59.051   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@498 -- # vm_is_running 1
00:12:59.051   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:59.051   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:59.051   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:59.051   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:59.051   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:59.051   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@376 -- # local vm_pid
00:12:59.051    22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:12:59.051   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@377 -- # vm_pid=121257
00:12:59.051   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@379 -- # /bin/kill -0 121257
00:12:59.051   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@380 -- # return 0
00:12:59.051   22:39:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@500 -- # sleep 1
00:12:59.051   22:39:59 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:12:59.051   22:39:59 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:59.051   22:39:59 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@498 -- # vm_is_running 1
00:12:59.051   22:39:59 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:59.051   22:39:59 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:59.051   22:39:59 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:59.051   22:39:59 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:59.051   22:39:59 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:59.051   22:39:59 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@376 -- # local vm_pid
00:12:59.051    22:39:59 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:12:59.051   22:39:59 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@377 -- # vm_pid=121257
00:12:59.051   22:39:59 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@379 -- # /bin/kill -0 121257
00:12:59.051   22:39:59 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@380 -- # return 0
00:12:59.051   22:39:59 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@500 -- # sleep 1
00:12:59.985   22:40:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:12:59.985   22:40:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:59.985   22:40:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@498 -- # vm_is_running 1
00:12:59.985   22:40:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:59.985   22:40:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:59.985   22:40:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:12:59.985   22:40:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:59.985   22:40:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:59.985   22:40:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@373 -- # return 1
00:12:59.985   22:40:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:12:59.985   22:40:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@500 -- # sleep 1
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:13:00.921  INFO: All VMs successfully shut down
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@505 -- # return 0
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@61 -- # vhost_kill 0
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@202 -- # local rc=0
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@203 -- # local vhost_name=0
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@210 -- # local vhost_dir
00:13:00.921    22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@211 -- # get_vhost_dir 0
00:13:00.921    22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:13:00.921    22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:13:00.921    22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@220 -- # local vhost_pid
00:13:00.921    22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@221 -- # vhost_pid=120418
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@222 -- # notice 'killing vhost (PID 120418) app'
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 120418) app'
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 120418) app'
00:13:00.921  INFO: killing vhost (PID 120418) app
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@224 -- # kill -INT 120418
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:13:00.921  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i = 0 ))
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@227 -- # kill -0 120418
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@228 -- # echo .
00:13:00.921  .
00:13:00.921   22:40:01 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@229 -- # sleep 1
00:13:01.857   22:40:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i++ ))
00:13:01.857   22:40:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:13:01.857   22:40:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@227 -- # kill -0 120418
00:13:01.857   22:40:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@228 -- # echo .
00:13:01.857  .
00:13:01.857   22:40:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@229 -- # sleep 1
00:13:01.857  [2024-12-10 22:40:02.613791] vfu_virtio_fs.c: 301:_vfu_virtio_fs_fuse_dispatcher_delete_cpl: *NOTICE*: FUSE dispatcher deleted
00:13:02.793   22:40:03 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i++ ))
00:13:02.793   22:40:03 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:13:02.793   22:40:03 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@227 -- # kill -0 120418
00:13:02.793   22:40:03 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@228 -- # echo .
00:13:02.793  .
00:13:02.793   22:40:03 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@229 -- # sleep 1
00:13:03.727   22:40:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i++ ))
00:13:03.727   22:40:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:13:03.727   22:40:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@227 -- # kill -0 120418
00:13:03.727  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (120418) - No such process
00:13:03.727   22:40:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@231 -- # break
00:13:03.727   22:40:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@234 -- # kill -0 120418
00:13:03.727  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (120418) - No such process
00:13:03.727   22:40:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@239 -- # kill -0 120418
00:13:03.727  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (120418) - No such process
00:13:03.727   22:40:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@245 -- # is_pid_child 120418
00:13:03.727   22:40:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1686 -- # local pid=120418 _pid
00:13:03.727   22:40:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1688 -- # read -r _pid
00:13:03.727    22:40:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1685 -- # jobs -pr
00:13:03.727   22:40:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1689 -- # (( pid == _pid ))
00:13:03.727   22:40:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1688 -- # read -r _pid
00:13:03.727   22:40:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1692 -- # return 1
00:13:03.727   22:40:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:13:03.727   22:40:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:03.727   22:40:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:13:03.727   22:40:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:13:03.727   22:40:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@261 -- # return 0
00:13:03.727   22:40:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@63 -- # vhosttestfini
00:13:03.727   22:40:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:13:03.727  
00:13:03.727  real	1m0.410s
00:13:03.727  user	3m54.046s
00:13:03.727  sys	0m3.453s
00:13:03.727   22:40:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:03.727   22:40:04 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:13:03.727  ************************************
00:13:03.727  END TEST vfio_user_virtio_fs_fio
00:13:03.727  ************************************
00:13:03.986   22:40:04 vfio_user_qemu -- vfio_user/vfio_user.sh@26 -- # vhosttestfini
00:13:03.986   22:40:04 vfio_user_qemu -- vhost/common.sh@54 -- # '[' iso == iso ']'
00:13:03.986   22:40:04 vfio_user_qemu -- vhost/common.sh@55 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh reset
00:13:04.920  Waiting for block devices as requested
00:13:05.178  0000:00:04.7 (8086 6f27): vfio-pci -> ioatdma
00:13:05.178  0000:00:04.6 (8086 6f26): vfio-pci -> ioatdma
00:13:05.178  0000:00:04.5 (8086 6f25): vfio-pci -> ioatdma
00:13:05.178  0000:00:04.4 (8086 6f24): vfio-pci -> ioatdma
00:13:05.437  0000:00:04.3 (8086 6f23): vfio-pci -> ioatdma
00:13:05.437  0000:00:04.2 (8086 6f22): vfio-pci -> ioatdma
00:13:05.437  0000:00:04.1 (8086 6f21): vfio-pci -> ioatdma
00:13:05.437  0000:00:04.0 (8086 6f20): vfio-pci -> ioatdma
00:13:05.437  0000:80:04.7 (8086 6f27): vfio-pci -> ioatdma
00:13:05.696  0000:80:04.6 (8086 6f26): vfio-pci -> ioatdma
00:13:05.696  0000:80:04.5 (8086 6f25): vfio-pci -> ioatdma
00:13:05.696  0000:80:04.4 (8086 6f24): vfio-pci -> ioatdma
00:13:05.696  0000:80:04.3 (8086 6f23): vfio-pci -> ioatdma
00:13:05.955  0000:80:04.2 (8086 6f22): vfio-pci -> ioatdma
00:13:05.955  0000:80:04.1 (8086 6f21): vfio-pci -> ioatdma
00:13:05.955  0000:80:04.0 (8086 6f20): vfio-pci -> ioatdma
00:13:06.214  0000:0d:00.0 (8086 0a54): vfio-pci -> nvme
00:13:06.214  
00:13:06.214  real	7m21.234s
00:13:06.214  user	30m47.417s
00:13:06.214  sys	0m16.885s
00:13:06.214   22:40:06 vfio_user_qemu -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:06.214   22:40:06 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:06.214  ************************************
00:13:06.214  END TEST vfio_user_qemu
00:13:06.214  ************************************
00:13:06.214   22:40:06  -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:13:06.214   22:40:06  -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:13:06.214   22:40:06  -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:13:06.214   22:40:06  -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:13:06.214   22:40:06  -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:13:06.214   22:40:06  -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:13:06.214   22:40:06  -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:13:06.214   22:40:06  -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:13:06.214   22:40:06  -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:13:06.214   22:40:06  -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:13:06.214   22:40:06  -- spdk/autotest.sh@370 -- # [[ 1 -eq 1 ]]
00:13:06.214   22:40:06  -- spdk/autotest.sh@371 -- # run_test sma /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/sma.sh
00:13:06.214   22:40:06  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:06.214   22:40:06  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:06.214   22:40:06  -- common/autotest_common.sh@10 -- # set +x
00:13:06.214  ************************************
00:13:06.214  START TEST sma
00:13:06.214  ************************************
00:13:06.214   22:40:06 sma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/sma.sh
00:13:06.214  * Looking for test storage...
00:13:06.214  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:13:06.214    22:40:06 sma -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:13:06.214     22:40:06 sma -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:13:06.214     22:40:06 sma -- common/autotest_common.sh@1711 -- # lcov --version
00:13:06.474    22:40:07 sma -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:13:06.474    22:40:07 sma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:06.474    22:40:07 sma -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:06.474    22:40:07 sma -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:06.474    22:40:07 sma -- scripts/common.sh@336 -- # IFS=.-:
00:13:06.474    22:40:07 sma -- scripts/common.sh@336 -- # read -ra ver1
00:13:06.474    22:40:07 sma -- scripts/common.sh@337 -- # IFS=.-:
00:13:06.474    22:40:07 sma -- scripts/common.sh@337 -- # read -ra ver2
00:13:06.474    22:40:07 sma -- scripts/common.sh@338 -- # local 'op=<'
00:13:06.474    22:40:07 sma -- scripts/common.sh@340 -- # ver1_l=2
00:13:06.474    22:40:07 sma -- scripts/common.sh@341 -- # ver2_l=1
00:13:06.474    22:40:07 sma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:06.474    22:40:07 sma -- scripts/common.sh@344 -- # case "$op" in
00:13:06.474    22:40:07 sma -- scripts/common.sh@345 -- # : 1
00:13:06.474    22:40:07 sma -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:06.474    22:40:07 sma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:06.474     22:40:07 sma -- scripts/common.sh@365 -- # decimal 1
00:13:06.474     22:40:07 sma -- scripts/common.sh@353 -- # local d=1
00:13:06.474     22:40:07 sma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:06.474     22:40:07 sma -- scripts/common.sh@355 -- # echo 1
00:13:06.474    22:40:07 sma -- scripts/common.sh@365 -- # ver1[v]=1
00:13:06.474     22:40:07 sma -- scripts/common.sh@366 -- # decimal 2
00:13:06.474     22:40:07 sma -- scripts/common.sh@353 -- # local d=2
00:13:06.474     22:40:07 sma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:06.474     22:40:07 sma -- scripts/common.sh@355 -- # echo 2
00:13:06.474    22:40:07 sma -- scripts/common.sh@366 -- # ver2[v]=2
00:13:06.474    22:40:07 sma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:06.474    22:40:07 sma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:06.474    22:40:07 sma -- scripts/common.sh@368 -- # return 0
00:13:06.474    22:40:07 sma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:06.474    22:40:07 sma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:13:06.474  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:06.474  		--rc genhtml_branch_coverage=1
00:13:06.474  		--rc genhtml_function_coverage=1
00:13:06.474  		--rc genhtml_legend=1
00:13:06.474  		--rc geninfo_all_blocks=1
00:13:06.474  		--rc geninfo_unexecuted_blocks=1
00:13:06.474  		
00:13:06.474  		'
00:13:06.474    22:40:07 sma -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:13:06.474  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:06.474  		--rc genhtml_branch_coverage=1
00:13:06.474  		--rc genhtml_function_coverage=1
00:13:06.474  		--rc genhtml_legend=1
00:13:06.474  		--rc geninfo_all_blocks=1
00:13:06.474  		--rc geninfo_unexecuted_blocks=1
00:13:06.474  		
00:13:06.474  		'
00:13:06.474    22:40:07 sma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:13:06.474  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:06.474  		--rc genhtml_branch_coverage=1
00:13:06.474  		--rc genhtml_function_coverage=1
00:13:06.474  		--rc genhtml_legend=1
00:13:06.474  		--rc geninfo_all_blocks=1
00:13:06.474  		--rc geninfo_unexecuted_blocks=1
00:13:06.474  		
00:13:06.474  		'
00:13:06.474    22:40:07 sma -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:13:06.474  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:06.474  		--rc genhtml_branch_coverage=1
00:13:06.474  		--rc genhtml_function_coverage=1
00:13:06.474  		--rc genhtml_legend=1
00:13:06.474  		--rc geninfo_all_blocks=1
00:13:06.474  		--rc geninfo_unexecuted_blocks=1
00:13:06.474  		
00:13:06.474  		'
00:13:06.474   22:40:07 sma -- sma/sma.sh@11 -- # run_test sma_nvmf_tcp /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/nvmf_tcp.sh
00:13:06.474   22:40:07 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:06.474   22:40:07 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:06.474   22:40:07 sma -- common/autotest_common.sh@10 -- # set +x
00:13:06.474  ************************************
00:13:06.474  START TEST sma_nvmf_tcp
00:13:06.474  ************************************
00:13:06.474   22:40:07 sma.sma_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/nvmf_tcp.sh
00:13:06.474  * Looking for test storage...
00:13:06.474  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:13:06.474    22:40:07 sma.sma_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:13:06.474     22:40:07 sma.sma_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:13:06.474     22:40:07 sma.sma_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version
00:13:06.474    22:40:07 sma.sma_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:13:06.474    22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:06.474    22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:06.474    22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:06.475    22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:13:06.475    22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:13:06.475    22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:13:06.475    22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:13:06.475    22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:13:06.475    22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:13:06.475    22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:13:06.475    22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:06.475    22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in
00:13:06.475    22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@345 -- # : 1
00:13:06.475    22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:06.475    22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:06.475     22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1
00:13:06.475     22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@353 -- # local d=1
00:13:06.475     22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:06.475     22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@355 -- # echo 1
00:13:06.475    22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:13:06.475     22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2
00:13:06.475     22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@353 -- # local d=2
00:13:06.475     22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:06.475     22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@355 -- # echo 2
00:13:06.475    22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:13:06.475    22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:06.475    22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:06.475    22:40:07 sma.sma_nvmf_tcp -- scripts/common.sh@368 -- # return 0
00:13:06.475    22:40:07 sma.sma_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:06.475    22:40:07 sma.sma_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:13:06.475  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:06.475  		--rc genhtml_branch_coverage=1
00:13:06.475  		--rc genhtml_function_coverage=1
00:13:06.475  		--rc genhtml_legend=1
00:13:06.475  		--rc geninfo_all_blocks=1
00:13:06.475  		--rc geninfo_unexecuted_blocks=1
00:13:06.475  		
00:13:06.475  		'
00:13:06.475    22:40:07 sma.sma_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:13:06.475  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:06.475  		--rc genhtml_branch_coverage=1
00:13:06.475  		--rc genhtml_function_coverage=1
00:13:06.475  		--rc genhtml_legend=1
00:13:06.475  		--rc geninfo_all_blocks=1
00:13:06.475  		--rc geninfo_unexecuted_blocks=1
00:13:06.475  		
00:13:06.475  		'
00:13:06.475    22:40:07 sma.sma_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:13:06.475  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:06.475  		--rc genhtml_branch_coverage=1
00:13:06.475  		--rc genhtml_function_coverage=1
00:13:06.475  		--rc genhtml_legend=1
00:13:06.475  		--rc geninfo_all_blocks=1
00:13:06.475  		--rc geninfo_unexecuted_blocks=1
00:13:06.475  		
00:13:06.475  		'
00:13:06.475    22:40:07 sma.sma_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:13:06.475  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:06.475  		--rc genhtml_branch_coverage=1
00:13:06.475  		--rc genhtml_function_coverage=1
00:13:06.475  		--rc genhtml_legend=1
00:13:06.475  		--rc geninfo_all_blocks=1
00:13:06.475  		--rc geninfo_unexecuted_blocks=1
00:13:06.475  		
00:13:06.475  		'
00:13:06.475   22:40:07 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:13:06.475   22:40:07 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@70 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:13:06.475   22:40:07 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@72 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:13:06.475   22:40:07 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@73 -- # tgtpid=132630
00:13:06.475   22:40:07 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@83 -- # smapid=132631
00:13:06.475   22:40:07 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@86 -- # sma_waitforlisten
00:13:06.475   22:40:07 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@75 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:13:06.475   22:40:07 sma.sma_nvmf_tcp -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:13:06.475   22:40:07 sma.sma_nvmf_tcp -- sma/common.sh@8 -- # local sma_port=8080
00:13:06.475   22:40:07 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i = 0 ))
00:13:06.475    22:40:07 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@75 -- # cat
00:13:06.475   22:40:07 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i < 5 ))
00:13:06.475   22:40:07 sma.sma_nvmf_tcp -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:13:06.475   22:40:07 sma.sma_nvmf_tcp -- sma/common.sh@14 -- # sleep 1s
00:13:06.733  [2024-12-10 22:40:07.287858] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:13:06.733  [2024-12-10 22:40:07.287960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132630 ]
00:13:06.733  EAL: No free 2048 kB hugepages reported on node 1
00:13:06.733  [2024-12-10 22:40:07.419132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:06.992  [2024-12-10 22:40:07.557161] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:13:07.561   22:40:08 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i++ ))
00:13:07.561   22:40:08 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i < 5 ))
00:13:07.561   22:40:08 sma.sma_nvmf_tcp -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:13:07.561   22:40:08 sma.sma_nvmf_tcp -- sma/common.sh@14 -- # sleep 1s
00:13:07.819  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:07.819  I0000 00:00:1733866808.545516  132631 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:07.819  [2024-12-10 22:40:08.557105] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:08.756   22:40:09 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i++ ))
00:13:08.756   22:40:09 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i < 5 ))
00:13:08.756   22:40:09 sma.sma_nvmf_tcp -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:13:08.756   22:40:09 sma.sma_nvmf_tcp -- sma/common.sh@12 -- # return 0
00:13:08.756   22:40:09 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@89 -- # rpc_cmd bdev_null_create null0 100 4096
00:13:08.756   22:40:09 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:08.757   22:40:09 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:08.757  null0
00:13:08.757   22:40:09 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:08.757   22:40:09 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@92 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:13:08.757   22:40:09 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:08.757   22:40:09 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:08.757  [
00:13:08.757  {
00:13:08.757  "trtype": "TCP",
00:13:08.757  "max_queue_depth": 128,
00:13:08.757  "max_io_qpairs_per_ctrlr": 127,
00:13:08.757  "in_capsule_data_size": 4096,
00:13:08.757  "max_io_size": 131072,
00:13:08.757  "io_unit_size": 131072,
00:13:08.757  "max_aq_depth": 128,
00:13:08.757  "num_shared_buffers": 511,
00:13:08.757  "buf_cache_size": 4294967295,
00:13:08.757  "dif_insert_or_strip": false,
00:13:08.757  "zcopy": false,
00:13:08.757  "c2h_success": true,
00:13:08.757  "sock_priority": 0,
00:13:08.757  "abort_timeout_sec": 1,
00:13:08.757  "ack_timeout": 0,
00:13:08.757  "data_wr_pool_size": 0
00:13:08.757  }
00:13:08.757  ]
00:13:08.757   22:40:09 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:08.757    22:40:09 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@95 -- # create_device nqn.2016-06.io.spdk:cnode0
00:13:08.757    22:40:09 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@95 -- # jq -r .handle
00:13:08.757    22:40:09 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:08.757  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:08.757  I0000 00:00:1733866809.493647  133075 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:08.757  I0000 00:00:1733866809.495488  133075 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:08.757  I0000 00:00:1733866809.496891  133076 subchannel.cc:806] subchannel 0x556d96706de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x556d965a6840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x556d96720da0, grpc.internal.client_channel_call_destination=0x7f5df3a90390, grpc.internal.event_engine=0x556d96593490, grpc.internal.security_connector=0x556d966b82b0, grpc.internal.subchannel_pool=0x556d96575690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x556d962929a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:40:09.496335906+01:00"}), backing off for 1000 ms
00:13:08.757  [2024-12-10 22:40:09.516293] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:13:09.015   22:40:09 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@95 -- # devid0=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:13:09.015   22:40:09 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@96 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:13:09.015   22:40:09 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:09.015   22:40:09 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:09.015  [
00:13:09.015  {
00:13:09.015  "nqn": "nqn.2016-06.io.spdk:cnode0",
00:13:09.015  "subtype": "NVMe",
00:13:09.015  "listen_addresses": [
00:13:09.015  {
00:13:09.015  "trtype": "TCP",
00:13:09.015  "adrfam": "IPv4",
00:13:09.015  "traddr": "127.0.0.1",
00:13:09.015  "trsvcid": "4420"
00:13:09.015  }
00:13:09.015  ],
00:13:09.015  "allow_any_host": false,
00:13:09.015  "hosts": [],
00:13:09.015  "serial_number": "00000000000000000000",
00:13:09.015  "model_number": "SPDK bdev Controller",
00:13:09.015  "max_namespaces": 32,
00:13:09.015  "min_cntlid": 1,
00:13:09.015  "max_cntlid": 65519,
00:13:09.015  "namespaces": []
00:13:09.015  }
00:13:09.015  ]
00:13:09.015   22:40:09 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:09.015    22:40:09 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@98 -- # create_device nqn.2016-06.io.spdk:cnode1
00:13:09.015    22:40:09 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:09.015    22:40:09 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@98 -- # jq -r .handle
00:13:09.274  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:09.274  I0000 00:00:1733866809.837731  133100 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:09.274  I0000 00:00:1733866809.839379  133100 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:09.274  I0000 00:00:1733866809.840724  133299 subchannel.cc:806] subchannel 0x55cc450fede0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55cc44f9e840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55cc45118da0, grpc.internal.client_channel_call_destination=0x7fb1025f2390, grpc.internal.event_engine=0x55cc44f8b490, grpc.internal.security_connector=0x55cc450b02b0, grpc.internal.subchannel_pool=0x55cc44f6d690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55cc44c8a9a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:40:09.840296885+01:00"}), backing off for 1000 ms
00:13:09.274   22:40:09 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@98 -- # devid1=nvmf-tcp:nqn.2016-06.io.spdk:cnode1
00:13:09.274   22:40:09 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@99 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:13:09.274   22:40:09 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:09.274   22:40:09 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:09.274  [
00:13:09.274  {
00:13:09.274  "nqn": "nqn.2016-06.io.spdk:cnode0",
00:13:09.274  "subtype": "NVMe",
00:13:09.274  "listen_addresses": [
00:13:09.274  {
00:13:09.274  "trtype": "TCP",
00:13:09.274  "adrfam": "IPv4",
00:13:09.274  "traddr": "127.0.0.1",
00:13:09.274  "trsvcid": "4420"
00:13:09.274  }
00:13:09.274  ],
00:13:09.274  "allow_any_host": false,
00:13:09.274  "hosts": [],
00:13:09.274  "serial_number": "00000000000000000000",
00:13:09.274  "model_number": "SPDK bdev Controller",
00:13:09.274  "max_namespaces": 32,
00:13:09.274  "min_cntlid": 1,
00:13:09.274  "max_cntlid": 65519,
00:13:09.274  "namespaces": []
00:13:09.274  }
00:13:09.274  ]
00:13:09.274   22:40:09 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:09.274   22:40:09 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@100 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:13:09.274   22:40:09 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:09.274   22:40:09 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:09.274  [
00:13:09.274  {
00:13:09.274  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:13:09.274  "subtype": "NVMe",
00:13:09.274  "listen_addresses": [
00:13:09.274  {
00:13:09.274  "trtype": "TCP",
00:13:09.274  "adrfam": "IPv4",
00:13:09.274  "traddr": "127.0.0.1",
00:13:09.274  "trsvcid": "4420"
00:13:09.274  }
00:13:09.274  ],
00:13:09.274  "allow_any_host": false,
00:13:09.274  "hosts": [],
00:13:09.274  "serial_number": "00000000000000000000",
00:13:09.274  "model_number": "SPDK bdev Controller",
00:13:09.274  "max_namespaces": 32,
00:13:09.274  "min_cntlid": 1,
00:13:09.274  "max_cntlid": 65519,
00:13:09.274  "namespaces": []
00:13:09.274  }
00:13:09.274  ]
00:13:09.274   22:40:09 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:09.274   22:40:09 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@101 -- # [[ nvmf-tcp:nqn.2016-06.io.spdk:cnode0 != \n\v\m\f\-\t\c\p\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:13:09.274    22:40:09 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@104 -- # rpc_cmd nvmf_get_subsystems
00:13:09.274    22:40:09 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:09.274    22:40:09 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:09.274    22:40:09 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@104 -- # jq -r '. | length'
00:13:09.274    22:40:09 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:09.274   22:40:09 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@104 -- # [[ 3 -eq 3 ]]
00:13:09.274    22:40:09 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@108 -- # create_device nqn.2016-06.io.spdk:cnode0
00:13:09.274    22:40:09 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:09.274    22:40:09 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@108 -- # jq -r .handle
00:13:09.533  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:09.533  I0000 00:00:1733866810.163945  133325 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:09.533  I0000 00:00:1733866810.165915  133325 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:09.533  I0000 00:00:1733866810.167251  133329 subchannel.cc:806] subchannel 0x55e7bd204de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55e7bd0a4840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55e7bd21eda0, grpc.internal.client_channel_call_destination=0x7fdf368ee390, grpc.internal.event_engine=0x55e7bd091490, grpc.internal.security_connector=0x55e7bd1b62b0, grpc.internal.subchannel_pool=0x55e7bd073690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55e7bcd909a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:40:10.166706716+01:00"}), backing off for 999 ms
00:13:09.533   22:40:10 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@108 -- # tmp0=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:13:09.533    22:40:10 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@109 -- # create_device nqn.2016-06.io.spdk:cnode1
00:13:09.533    22:40:10 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:09.533    22:40:10 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@109 -- # jq -r .handle
00:13:09.792  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:09.792  I0000 00:00:1733866810.401003  133352 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:09.792  I0000 00:00:1733866810.402710  133352 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:09.792  I0000 00:00:1733866810.404070  133357 subchannel.cc:806] subchannel 0x55ed3b0f5de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55ed3af95840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55ed3b10fda0, grpc.internal.client_channel_call_destination=0x7fd9a1bd7390, grpc.internal.event_engine=0x55ed3af82490, grpc.internal.security_connector=0x55ed3b0a72b0, grpc.internal.subchannel_pool=0x55ed3af64690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55ed3ac819a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:40:10.403562183+01:00"}), backing off for 1000 ms
00:13:09.792   22:40:10 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@109 -- # tmp1=nvmf-tcp:nqn.2016-06.io.spdk:cnode1
00:13:09.792    22:40:10 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@111 -- # rpc_cmd nvmf_get_subsystems
00:13:09.792    22:40:10 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@111 -- # jq -r '. | length'
00:13:09.792    22:40:10 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:09.792    22:40:10 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:09.792    22:40:10 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:09.792   22:40:10 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@111 -- # [[ 3 -eq 3 ]]
00:13:09.792   22:40:10 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@112 -- # [[ nvmf-tcp:nqn.2016-06.io.spdk:cnode0 == \n\v\m\f\-\t\c\p\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:13:09.792   22:40:10 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@113 -- # [[ nvmf-tcp:nqn.2016-06.io.spdk:cnode1 == \n\v\m\f\-\t\c\p\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:13:09.792   22:40:10 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@116 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:13:09.792   22:40:10 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:10.051  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:10.051  I0000 00:00:1733866810.678235  133380 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:10.051  I0000 00:00:1733866810.679986  133380 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:10.051  I0000 00:00:1733866810.681286  133387 subchannel.cc:806] subchannel 0x5636bb3a6de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5636bb246840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5636bb3c0da0, grpc.internal.client_channel_call_destination=0x7fae123c3390, grpc.internal.event_engine=0x5636bb0c5030, grpc.internal.security_connector=0x5636bb24e770, grpc.internal.subchannel_pool=0x5636bb215690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5636baf329a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:40:10.680878556+01:00"}), backing off for 999 ms
00:13:10.051  {}
00:13:10.051   22:40:10 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@117 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:13:10.051   22:40:10 sma.sma_nvmf_tcp -- common/autotest_common.sh@652 -- # local es=0
00:13:10.051   22:40:10 sma.sma_nvmf_tcp -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:13:10.051   22:40:10 sma.sma_nvmf_tcp -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:13:10.051   22:40:10 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:10.051    22:40:10 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:13:10.051   22:40:10 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:10.051   22:40:10 sma.sma_nvmf_tcp -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:13:10.051   22:40:10 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:10.051   22:40:10 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:10.051  [2024-12-10 22:40:10.723550] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:cnode0' does not exist
00:13:10.051  request:
00:13:10.051  {
00:13:10.051  "nqn": "nqn.2016-06.io.spdk:cnode0",
00:13:10.051  "method": "nvmf_get_subsystems",
00:13:10.051  "req_id": 1
00:13:10.051  }
00:13:10.051  Got JSON-RPC error response
00:13:10.051  response:
00:13:10.051  {
00:13:10.051  "code": -19,
00:13:10.051  "message": "No such device"
00:13:10.051  }
00:13:10.051   22:40:10 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:13:10.051   22:40:10 sma.sma_nvmf_tcp -- common/autotest_common.sh@655 -- # es=1
00:13:10.051   22:40:10 sma.sma_nvmf_tcp -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:10.051   22:40:10 sma.sma_nvmf_tcp -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:10.051   22:40:10 sma.sma_nvmf_tcp -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:10.051    22:40:10 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@118 -- # rpc_cmd nvmf_get_subsystems
00:13:10.051    22:40:10 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@118 -- # jq -r '. | length'
00:13:10.051    22:40:10 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:10.051    22:40:10 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:10.051    22:40:10 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:10.051   22:40:10 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@118 -- # [[ 2 -eq 2 ]]
00:13:10.051   22:40:10 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@120 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode1
00:13:10.051   22:40:10 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:10.310  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:10.310  I0000 00:00:1733866810.978541  133411 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:10.310  I0000 00:00:1733866810.980240  133411 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:10.311  I0000 00:00:1733866810.981578  133550 subchannel.cc:806] subchannel 0x5600c4588de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5600c4428840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5600c45a2da0, grpc.internal.client_channel_call_destination=0x7f9f50be2390, grpc.internal.event_engine=0x5600c42a7030, grpc.internal.security_connector=0x5600c4430770, grpc.internal.subchannel_pool=0x5600c43f7690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5600c41149a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:40:10.981010811+01:00"}), backing off for 1000 ms
00:13:10.311  {}
00:13:10.311   22:40:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@121 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:13:10.311   22:40:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@652 -- # local es=0
00:13:10.311   22:40:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:13:10.311   22:40:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:13:10.311   22:40:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:10.311    22:40:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:13:10.311   22:40:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:10.311   22:40:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:13:10.311   22:40:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:10.311   22:40:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:10.311  [2024-12-10 22:40:11.024432] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:cnode1' does not exist
00:13:10.311  request:
00:13:10.311  {
00:13:10.311  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:13:10.311  "method": "nvmf_get_subsystems",
00:13:10.311  "req_id": 1
00:13:10.311  }
00:13:10.311  Got JSON-RPC error response
00:13:10.311  response:
00:13:10.311  {
00:13:10.311  "code": -19,
00:13:10.311  "message": "No such device"
00:13:10.311  }
00:13:10.311   22:40:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:13:10.311   22:40:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@655 -- # es=1
00:13:10.311   22:40:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:10.311   22:40:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:10.311   22:40:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:10.311    22:40:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@122 -- # rpc_cmd nvmf_get_subsystems
00:13:10.311    22:40:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@122 -- # jq -r '. | length'
00:13:10.311    22:40:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:10.311    22:40:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:10.311    22:40:11 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:10.311   22:40:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@122 -- # [[ 1 -eq 1 ]]
00:13:10.311   22:40:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@125 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:13:10.311   22:40:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:10.569  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:10.569  I0000 00:00:1733866811.296414  133629 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:10.569  I0000 00:00:1733866811.298282  133629 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:10.569  I0000 00:00:1733866811.299629  133633 subchannel.cc:806] subchannel 0x5610ad851de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5610ad6f1840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5610ad86bda0, grpc.internal.client_channel_call_destination=0x7f3c4a7ad390, grpc.internal.event_engine=0x5610ad570030, grpc.internal.security_connector=0x5610ad6f9770, grpc.internal.subchannel_pool=0x5610ad6c0690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5610ad3dd9a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:40:11.299134272+01:00"}), backing off for 1000 ms
00:13:10.569  {}
00:13:10.569   22:40:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@126 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode1
00:13:10.569   22:40:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:10.828  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:10.828  I0000 00:00:1733866811.510466  133653 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:10.828  I0000 00:00:1733866811.511905  133653 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:10.828  I0000 00:00:1733866811.513041  133654 subchannel.cc:806] subchannel 0x55e6f2a25de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55e6f28c5840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55e6f2a3fda0, grpc.internal.client_channel_call_destination=0x7f6bfe8fb390, grpc.internal.event_engine=0x55e6f2744030, grpc.internal.security_connector=0x55e6f28cd770, grpc.internal.subchannel_pool=0x55e6f2894690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55e6f25b19a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:40:11.512620541+01:00"}), backing off for 1000 ms
00:13:10.828  {}
00:13:10.828    22:40:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@129 -- # jq -r .handle
00:13:10.828    22:40:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@129 -- # create_device nqn.2016-06.io.spdk:cnode0
00:13:10.828    22:40:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:11.087  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:11.087  I0000 00:00:1733866811.740533  133677 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:11.087  I0000 00:00:1733866811.742266  133677 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:11.088  I0000 00:00:1733866811.743521  133684 subchannel.cc:806] subchannel 0x56108a0b1de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x561089f51840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56108a0cbda0, grpc.internal.client_channel_call_destination=0x7f0040e20390, grpc.internal.event_engine=0x561089f3e490, grpc.internal.security_connector=0x56108a0632b0, grpc.internal.subchannel_pool=0x561089f20690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x561089c3d9a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:40:11.743009614+01:00"}), backing off for 1000 ms
00:13:11.088  [2024-12-10 22:40:11.762999] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:13:11.088   22:40:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@129 -- # devid0=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:13:11.088    22:40:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@130 -- # create_device nqn.2016-06.io.spdk:cnode1
00:13:11.088    22:40:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@130 -- # jq -r .handle
00:13:11.088    22:40:11 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:11.347  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:11.347  I0000 00:00:1733866811.994094  133707 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:11.347  I0000 00:00:1733866811.995822  133707 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:11.347  I0000 00:00:1733866811.997127  133710 subchannel.cc:806] subchannel 0x56404f917de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56404f7b7840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56404f931da0, grpc.internal.client_channel_call_destination=0x7f985c9ce390, grpc.internal.event_engine=0x56404f7a4490, grpc.internal.security_connector=0x56404f8c92b0, grpc.internal.subchannel_pool=0x56404f786690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56404f4a39a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:40:11.996621421+01:00"}), backing off for 1000 ms
00:13:11.347   22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@130 -- # devid1=nvmf-tcp:nqn.2016-06.io.spdk:cnode1
00:13:11.347    22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@131 -- # rpc_cmd bdev_get_bdevs -b null0
00:13:11.347    22:40:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:11.347    22:40:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:11.347    22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@131 -- # jq -r '.[].uuid'
00:13:11.347    22:40:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:11.347   22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@131 -- # uuid=62b5de99-f811-4b45-9f7d-3199c78bb1bf
00:13:11.347   22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@134 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 62b5de99-f811-4b45-9f7d-3199c78bb1bf
00:13:11.347   22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@45 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:11.347    22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@45 -- # uuid2base64 62b5de99-f811-4b45-9f7d-3199c78bb1bf
00:13:11.347    22:40:12 sma.sma_nvmf_tcp -- sma/common.sh@20 -- # python
00:13:11.605  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:11.605  I0000 00:00:1733866812.369316  133772 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:11.605  I0000 00:00:1733866812.371060  133772 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:11.605  I0000 00:00:1733866812.372415  133935 subchannel.cc:806] subchannel 0x559d5025fde0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x559d500ff840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x559d50279da0, grpc.internal.client_channel_call_destination=0x7ffab4058390, grpc.internal.event_engine=0x559d4ff7e030, grpc.internal.security_connector=0x559d502112b0, grpc.internal.subchannel_pool=0x559d500ce690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x559d4fdeb9a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:40:12.371976287+01:00"}), backing off for 1000 ms
00:13:11.863  {}
00:13:11.863    22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@135 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:13:11.863    22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@135 -- # jq -r '.[0].namespaces | length'
00:13:11.863    22:40:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:11.863    22:40:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:11.863    22:40:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:11.863   22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@135 -- # [[ 1 -eq 1 ]]
00:13:11.863    22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@136 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:13:11.863    22:40:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:11.863    22:40:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:11.863    22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@136 -- # jq -r '.[0].namespaces | length'
00:13:11.863    22:40:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:11.863   22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@136 -- # [[ 0 -eq 0 ]]
00:13:11.863    22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@137 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:13:11.863    22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@137 -- # jq -r '.[0].namespaces[0].uuid'
00:13:11.863    22:40:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:11.863    22:40:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:11.863    22:40:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:11.863   22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@137 -- # [[ 62b5de99-f811-4b45-9f7d-3199c78bb1bf == \6\2\b\5\d\e\9\9\-\f\8\1\1\-\4\b\4\5\-\9\f\7\d\-\3\1\9\9\c\7\8\b\b\1\b\f ]]
00:13:11.863   22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@140 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 62b5de99-f811-4b45-9f7d-3199c78bb1bf
00:13:11.863   22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@45 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:11.863    22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@45 -- # uuid2base64 62b5de99-f811-4b45-9f7d-3199c78bb1bf
00:13:11.863    22:40:12 sma.sma_nvmf_tcp -- sma/common.sh@20 -- # python
00:13:12.122  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:12.122  I0000 00:00:1733866812.831117  133964 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:12.122  I0000 00:00:1733866812.833041  133964 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:12.122  I0000 00:00:1733866812.834428  133974 subchannel.cc:806] subchannel 0x556a97463de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x556a97303840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x556a9747dda0, grpc.internal.client_channel_call_destination=0x7fa5f7284390, grpc.internal.event_engine=0x556a97182030, grpc.internal.security_connector=0x556a974152b0, grpc.internal.subchannel_pool=0x556a972d2690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x556a96fef9a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:40:12.833983863+01:00"}), backing off for 1000 ms
00:13:12.122  {}
00:13:12.122    22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@141 -- # jq -r '.[0].namespaces | length'
00:13:12.122    22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@141 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:13:12.122    22:40:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.122    22:40:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:12.122    22:40:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:12.381   22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@141 -- # [[ 1 -eq 1 ]]
00:13:12.381    22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@142 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:13:12.381    22:40:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.381    22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@142 -- # jq -r '.[0].namespaces | length'
00:13:12.381    22:40:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:12.381    22:40:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:12.381   22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@142 -- # [[ 0 -eq 0 ]]
00:13:12.381    22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@143 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:13:12.381    22:40:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.381    22:40:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:12.381    22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@143 -- # jq -r '.[0].namespaces[0].uuid'
00:13:12.381    22:40:12 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:12.381   22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@143 -- # [[ 62b5de99-f811-4b45-9f7d-3199c78bb1bf == \6\2\b\5\d\e\9\9\-\f\8\1\1\-\4\b\4\5\-\9\f\7\d\-\3\1\9\9\c\7\8\b\b\1\b\f ]]
00:13:12.381   22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@146 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 62b5de99-f811-4b45-9f7d-3199c78bb1bf
00:13:12.381   22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@59 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:12.381    22:40:12 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@59 -- # uuid2base64 62b5de99-f811-4b45-9f7d-3199c78bb1bf
00:13:12.381    22:40:12 sma.sma_nvmf_tcp -- sma/common.sh@20 -- # python
00:13:12.641  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:12.641  I0000 00:00:1733866813.203563  134003 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:12.641  I0000 00:00:1733866813.205158  134003 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:12.641  I0000 00:00:1733866813.206401  134006 subchannel.cc:806] subchannel 0x561a15d9bde0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x561a15c3b840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x561a15db5da0, grpc.internal.client_channel_call_destination=0x7f391d36b390, grpc.internal.event_engine=0x561a15c28490, grpc.internal.security_connector=0x561a15d4d2b0, grpc.internal.subchannel_pool=0x561a15c0a690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x561a159279a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:40:13.205977895+01:00"}), backing off for 1000 ms
00:13:12.641  {}
00:13:12.641    22:40:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@147 -- # jq -r '.[0].namespaces | length'
00:13:12.641    22:40:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@147 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:13:12.641    22:40:13 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.641    22:40:13 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:12.641    22:40:13 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:12.641   22:40:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@147 -- # [[ 0 -eq 0 ]]
00:13:12.641    22:40:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@148 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:13:12.641    22:40:13 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.641    22:40:13 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:12.641    22:40:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@148 -- # jq -r '.[0].namespaces | length'
00:13:12.641    22:40:13 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:12.641   22:40:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@148 -- # [[ 0 -eq 0 ]]
00:13:12.641   22:40:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@151 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 62b5de99-f811-4b45-9f7d-3199c78bb1bf
00:13:12.641   22:40:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@59 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:12.641    22:40:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@59 -- # uuid2base64 62b5de99-f811-4b45-9f7d-3199c78bb1bf
00:13:12.641    22:40:13 sma.sma_nvmf_tcp -- sma/common.sh@20 -- # python
00:13:12.900  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:12.900  I0000 00:00:1733866813.577287  134137 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:12.900  I0000 00:00:1733866813.579029  134137 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:12.900  I0000 00:00:1733866813.580371  134230 subchannel.cc:806] subchannel 0x55a6b79aede0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55a6b784e840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55a6b79c8da0, grpc.internal.client_channel_call_destination=0x7f00e013d390, grpc.internal.event_engine=0x55a6b783b490, grpc.internal.security_connector=0x55a6b79602b0, grpc.internal.subchannel_pool=0x55a6b781d690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55a6b753a9a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:40:13.579911288+01:00"}), backing off for 999 ms
00:13:12.900  {}
00:13:12.900   22:40:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@153 -- # cleanup
00:13:12.900   22:40:13 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@13 -- # killprocess 132630
00:13:12.900   22:40:13 sma.sma_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 132630 ']'
00:13:12.900   22:40:13 sma.sma_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 132630
00:13:12.900    22:40:13 sma.sma_nvmf_tcp -- common/autotest_common.sh@959 -- # uname
00:13:12.900   22:40:13 sma.sma_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:12.900    22:40:13 sma.sma_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 132630
00:13:12.900   22:40:13 sma.sma_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:12.900   22:40:13 sma.sma_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:12.900   22:40:13 sma.sma_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 132630'
00:13:12.900  killing process with pid 132630
00:13:12.900   22:40:13 sma.sma_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 132630
00:13:12.900   22:40:13 sma.sma_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 132630
00:13:16.189   22:40:16 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@14 -- # killprocess 132631
00:13:16.189   22:40:16 sma.sma_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 132631 ']'
00:13:16.189   22:40:16 sma.sma_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 132631
00:13:16.189    22:40:16 sma.sma_nvmf_tcp -- common/autotest_common.sh@959 -- # uname
00:13:16.189   22:40:16 sma.sma_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:16.189    22:40:16 sma.sma_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 132631
00:13:16.189   22:40:16 sma.sma_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=python3
00:13:16.189   22:40:16 sma.sma_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:13:16.189   22:40:16 sma.sma_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 132631'
00:13:16.189  killing process with pid 132631
00:13:16.189   22:40:16 sma.sma_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 132631
00:13:16.189   22:40:16 sma.sma_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 132631
00:13:16.189   22:40:16 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@154 -- # trap - SIGINT SIGTERM EXIT
00:13:16.189  
00:13:16.189  real	0m9.306s
00:13:16.189  user	0m12.709s
00:13:16.189  sys	0m1.264s
00:13:16.189   22:40:16 sma.sma_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:16.189   22:40:16 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:16.189  ************************************
00:13:16.189  END TEST sma_nvmf_tcp
00:13:16.189  ************************************
00:13:16.189   22:40:16 sma -- sma/sma.sh@12 -- # run_test sma_vfiouser_qemu /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vfiouser_qemu.sh
00:13:16.189   22:40:16 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:16.189   22:40:16 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:16.189   22:40:16 sma -- common/autotest_common.sh@10 -- # set +x
00:13:16.189  ************************************
00:13:16.189  START TEST sma_vfiouser_qemu
00:13:16.189  ************************************
00:13:16.189   22:40:16 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vfiouser_qemu.sh
00:13:16.189  * Looking for test storage...
00:13:16.189  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:13:16.189     22:40:16 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:13:16.189     22:40:16 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1711 -- # lcov --version
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@336 -- # IFS=.-:
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@336 -- # read -ra ver1
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@337 -- # IFS=.-:
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@337 -- # read -ra ver2
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@338 -- # local 'op=<'
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@340 -- # ver1_l=2
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@341 -- # ver2_l=1
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@344 -- # case "$op" in
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@345 -- # : 1
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:16.189     22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@365 -- # decimal 1
00:13:16.189     22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@353 -- # local d=1
00:13:16.189     22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:16.189     22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@355 -- # echo 1
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@365 -- # ver1[v]=1
00:13:16.189     22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@366 -- # decimal 2
00:13:16.189     22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@353 -- # local d=2
00:13:16.189     22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:16.189     22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@355 -- # echo 2
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@366 -- # ver2[v]=2
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- scripts/common.sh@368 -- # return 0
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:13:16.189  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:16.189  		--rc genhtml_branch_coverage=1
00:13:16.189  		--rc genhtml_function_coverage=1
00:13:16.189  		--rc genhtml_legend=1
00:13:16.189  		--rc geninfo_all_blocks=1
00:13:16.189  		--rc geninfo_unexecuted_blocks=1
00:13:16.189  		
00:13:16.189  		'
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:13:16.189  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:16.189  		--rc genhtml_branch_coverage=1
00:13:16.189  		--rc genhtml_function_coverage=1
00:13:16.189  		--rc genhtml_legend=1
00:13:16.189  		--rc geninfo_all_blocks=1
00:13:16.189  		--rc geninfo_unexecuted_blocks=1
00:13:16.189  		
00:13:16.189  		'
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:13:16.189  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:16.189  		--rc genhtml_branch_coverage=1
00:13:16.189  		--rc genhtml_function_coverage=1
00:13:16.189  		--rc genhtml_legend=1
00:13:16.189  		--rc geninfo_all_blocks=1
00:13:16.189  		--rc geninfo_unexecuted_blocks=1
00:13:16.189  		
00:13:16.189  		'
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:13:16.189  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:16.189  		--rc genhtml_branch_coverage=1
00:13:16.189  		--rc genhtml_function_coverage=1
00:13:16.189  		--rc genhtml_legend=1
00:13:16.189  		--rc geninfo_all_blocks=1
00:13:16.189  		--rc geninfo_unexecuted_blocks=1
00:13:16.189  		
00:13:16.189  		'
00:13:16.189   22:40:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- vfio_user/common.sh@6 -- # : 128
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- vfio_user/common.sh@7 -- # : 512
00:13:16.189    22:40:16 sma.sma_vfiouser_qemu -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:13:16.189     22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@6 -- # : false
00:13:16.189     22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@7 -- # : /root/vhost_test
00:13:16.189     22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@8 -- # : /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:13:16.189     22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@9 -- # : qemu-img
00:13:16.189      22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:13:16.189     22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:13:16.189     22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:13:16.189     22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:13:16.189     22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:13:16.190     22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:13:16.190     22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:13:16.190       22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vfiouser_qemu.sh
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:13:16.190     22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:13:16.190     22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:13:16.190     22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:13:16.190     22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:13:16.190     22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:13:16.190     22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- common/autotest.config@2 -- # vhost_0_main_core=0
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:13:16.190     22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:13:16.190      22:40:16 sma.sma_vfiouser_qemu -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:13:16.190       22:40:16 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:13:16.190        22:40:16 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@244 -- # check_cgroup
00:13:16.190        22:40:16 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:13:16.190        22:40:16 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:13:16.190        22:40:16 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@10 -- # echo 2
00:13:16.190       22:40:16 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:13:16.190    22:40:16 sma.sma_vfiouser_qemu -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:13:16.190    22:40:16 sma.sma_vfiouser_qemu -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:13:16.190    22:40:16 sma.sma_vfiouser_qemu -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@104 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@107 -- # VM_PASSWORD=root
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@108 -- # vm_no=0
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@110 -- # VFO_ROOT_PATH=/tmp/sma/vfio-user/qemu
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@112 -- # '[' -e /tmp/sma/vfio-user/qemu ']'
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@113 -- # mkdir -p /tmp/sma/vfio-user/qemu
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@116 -- # used_vms=0
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@117 -- # vm_kill_all
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@476 -- # local vm
00:13:16.190    22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@477 -- # vm_list_all
00:13:16.190    22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@466 -- # vms=()
00:13:16.190    22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@466 -- # local vms
00:13:16.190    22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:13:16.190    22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:13:16.190    22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@478 -- # vm_kill 1
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@442 -- # vm_num_is_valid 1
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/1
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@446 -- # return 0
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@119 -- # vm_setup --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disk-type=virtio --force=0 '--qemu-args=-qmp tcp:localhost:10005,server,nowait -device pci-bridge,chassis_nr=1,id=pci.spdk.0 -device pci-bridge,chassis_nr=2,id=pci.spdk.1'
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@518 -- # xtrace_disable
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:16.190  INFO: Creating new VM in /root/vhost_test/vms/0
00:13:16.190  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:13:16.190  INFO: TASK MASK: 1-2
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@671 -- # local node_num=0
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@672 -- # local boot_disk_present=false
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:13:16.190  INFO: NUMA NODE: 0
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@677 -- # [[ -n '' ]]
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@686 -- # [[ -z '' ]]
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@691 -- # (( 0 == 0 ))
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@691 -- # [[ virtio == virtio* ]]
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@692 -- # disks=("default_virtio.img")
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@701 -- # IFS=,
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@701 -- # read -r disk disk_type _
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@702 -- # [[ -z '' ]]
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@702 -- # disk_type=virtio
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@704 -- # case $disk_type in
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@706 -- # local raw_name=RAWSCSI
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@707 -- # local raw_disk=/root/vhost_test/vms/0/test.img
00:13:16.190   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@710 -- # [[ -f default_virtio.img ]]
00:13:16.191   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@714 -- # notice 'Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:13:16.191   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:13:16.191   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:13:16.191   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:13:16.191   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:13:16.191   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:16.191   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:13:16.191   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:13:16.191  INFO: Creating Virtio disc /root/vhost_test/vms/0/test.img
00:13:16.191   22:40:16 sma.sma_vfiouser_qemu -- vhost/common.sh@715 -- # dd if=/dev/zero of=/root/vhost_test/vms/0/test.img bs=1024k count=1024
00:13:16.449  1024+0 records in
00:13:16.449  1024+0 records out
00:13:16.449  1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.58423 s, 1.8 GB/s
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@718 -- # cmd+=(-device "virtio-scsi-pci,num_queues=$queue_number")
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@719 -- # cmd+=(-device "scsi-hd,drive=hd$i,vendor=$raw_name")
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@720 -- # cmd+=(-drive "if=none,id=hd$i,file=$raw_disk,format=raw$raw_cache")
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@780 -- # [[ -n '' ]]
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@785 -- # (( 1 ))
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@785 -- # cmd+=("${qemu_args[@]}")
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/0/run.sh'
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/0/run.sh'
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/0/run.sh'
00:13:16.449  INFO: Saving to /root/vhost_test/vms/0/run.sh
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@787 -- # cat
00:13:16.449    22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 1-2 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :100 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10002,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/0/qemu.pid -serial file:/root/vhost_test/vms/0/serial.log -D /root/vhost_test/vms/0/qemu.log -chardev file,path=/root/vhost_test/vms/0/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10000-:22,hostfwd=tcp::10001-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device virtio-scsi-pci,num_queues=2 -device scsi-hd,drive=hd,vendor=RAWSCSI -drive if=none,id=hd,file=/root/vhost_test/vms/0/test.img,format=raw '-qmp tcp:localhost:10005,server,nowait -device pci-bridge,chassis_nr=1,id=pci.spdk.0 -device pci-bridge,chassis_nr=2,id=pci.spdk.1'
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/0/run.sh
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@827 -- # echo 10000
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@828 -- # echo 10001
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@829 -- # echo 10002
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/0/migration_port
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@832 -- # [[ -z '' ]]
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@834 -- # echo 10004
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@835 -- # echo 100
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@837 -- # [[ -z '' ]]
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@838 -- # [[ -z '' ]]
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@124 -- # vm_run 0
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@843 -- # local run_all=false
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@844 -- # local vms_to_run=
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@846 -- # getopts a-: optchar
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@856 -- # false
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@859 -- # shift 0
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@860 -- # for vm in "$@"
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:16.449   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:16.450   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/0/run.sh ]]
00:13:16.450   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@866 -- # vms_to_run+=' 0'
00:13:16.450   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:13:16.450   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@871 -- # vm_is_running 0
00:13:16.450   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:13:16.450   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:16.450   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:16.450   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:13:16.450   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:13:16.450   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@373 -- # return 1
00:13:16.450   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/0/run.sh'
00:13:16.450   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/0/run.sh'
00:13:16.450   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:13:16.450   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:13:16.450   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:13:16.450   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:16.450   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:13:16.450   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/0/run.sh'
00:13:16.450  INFO: running /root/vhost_test/vms/0/run.sh
00:13:16.450   22:40:17 sma.sma_vfiouser_qemu -- vhost/common.sh@877 -- # /root/vhost_test/vms/0/run.sh
00:13:16.450  Running VM in /root/vhost_test/vms/0
00:13:17.017  Waiting for QEMU pid file
00:13:17.952  === qemu.log ===
00:13:17.952  === qemu.log ===
00:13:17.952   22:40:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@125 -- # vm_wait_for_boot 300 0
00:13:17.952   22:40:18 sma.sma_vfiouser_qemu -- vhost/common.sh@913 -- # assert_number 300
00:13:17.952   22:40:18 sma.sma_vfiouser_qemu -- vhost/common.sh@281 -- # [[ 300 =~ [0-9]+ ]]
00:13:17.952   22:40:18 sma.sma_vfiouser_qemu -- vhost/common.sh@281 -- # return 0
00:13:17.952   22:40:18 sma.sma_vfiouser_qemu -- vhost/common.sh@915 -- # xtrace_disable
00:13:17.952   22:40:18 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:17.952  INFO: Waiting for VMs to boot
00:13:17.952  INFO: waiting for VM0 (/root/vhost_test/vms/0)
00:13:39.884  
00:13:39.884  INFO: VM0 ready
00:13:39.884  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:39.884  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:39.884  INFO: all VMs ready
00:13:39.884   22:40:39 sma.sma_vfiouser_qemu -- vhost/common.sh@973 -- # return 0
00:13:39.884   22:40:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@129 -- # tgtpid=138947
00:13:39.884   22:40:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@128 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:13:39.884   22:40:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@130 -- # waitforlisten 138947
00:13:39.884   22:40:39 sma.sma_vfiouser_qemu -- common/autotest_common.sh@835 -- # '[' -z 138947 ']'
00:13:39.884   22:40:39 sma.sma_vfiouser_qemu -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:39.884   22:40:39 sma.sma_vfiouser_qemu -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:39.884   22:40:39 sma.sma_vfiouser_qemu -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:39.884  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:39.884   22:40:39 sma.sma_vfiouser_qemu -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:39.884   22:40:39 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:39.884  [2024-12-10 22:40:39.955398] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:13:39.884  [2024-12-10 22:40:39.955503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138947 ]
00:13:39.884  EAL: No free 2048 kB hugepages reported on node 1
00:13:39.884  [2024-12-10 22:40:40.088332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:39.884  [2024-12-10 22:40:40.227200] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:13:40.143   22:40:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:40.143   22:40:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@868 -- # return 0
00:13:40.143   22:40:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@133 -- # rpc_cmd dpdk_cryptodev_scan_accel_module
00:13:40.143   22:40:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:40.143   22:40:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:40.143   22:40:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:40.143   22:40:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@134 -- # rpc_cmd dpdk_cryptodev_set_driver -d crypto_aesni_mb
00:13:40.143   22:40:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:40.143   22:40:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:40.143  [2024-12-10 22:40:40.829530] accel_dpdk_cryptodev.c: 224:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:13:40.143   22:40:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:40.143   22:40:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@135 -- # rpc_cmd accel_assign_opc -o encrypt -m dpdk_cryptodev
00:13:40.143   22:40:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:40.143   22:40:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:40.143  [2024-12-10 22:40:40.837534] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:13:40.143   22:40:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:40.143   22:40:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@136 -- # rpc_cmd accel_assign_opc -o decrypt -m dpdk_cryptodev
00:13:40.143   22:40:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:40.143   22:40:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:40.143  [2024-12-10 22:40:40.845570] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:13:40.143   22:40:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:40.143   22:40:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@137 -- # rpc_cmd framework_start_init
00:13:40.143   22:40:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:40.143   22:40:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:40.402  [2024-12-10 22:40:41.147213] accel_dpdk_cryptodev.c:1179:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 1
00:13:41.337   22:40:41 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:41.337   22:40:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@140 -- # rpc_cmd bdev_null_create null0 100 4096
00:13:41.337   22:40:41 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:41.338   22:40:41 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:41.338  null0
00:13:41.338   22:40:41 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:41.338   22:40:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@141 -- # rpc_cmd bdev_null_create null1 100 4096
00:13:41.338   22:40:41 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:41.338   22:40:41 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:41.338  null1
00:13:41.338   22:40:41 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:41.338   22:40:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@160 -- # smapid=139180
00:13:41.338   22:40:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@163 -- # sma_waitforlisten
00:13:41.338   22:40:41 sma.sma_vfiouser_qemu -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:13:41.338   22:40:41 sma.sma_vfiouser_qemu -- sma/common.sh@8 -- # local sma_port=8080
00:13:41.338   22:40:41 sma.sma_vfiouser_qemu -- sma/common.sh@10 -- # (( i = 0 ))
00:13:41.338   22:40:41 sma.sma_vfiouser_qemu -- sma/common.sh@10 -- # (( i < 5 ))
00:13:41.338   22:40:41 sma.sma_vfiouser_qemu -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:13:41.338   22:40:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@144 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:13:41.338    22:40:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@144 -- # cat
00:13:41.338   22:40:41 sma.sma_vfiouser_qemu -- sma/common.sh@14 -- # sleep 1s
00:13:41.596  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:41.596  I0000 00:00:1733866842.217845  139180 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:42.165   22:40:42 sma.sma_vfiouser_qemu -- sma/common.sh@10 -- # (( i++ ))
00:13:42.165   22:40:42 sma.sma_vfiouser_qemu -- sma/common.sh@10 -- # (( i < 5 ))
00:13:42.165   22:40:42 sma.sma_vfiouser_qemu -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:13:42.423   22:40:42 sma.sma_vfiouser_qemu -- sma/common.sh@12 -- # return 0
00:13:42.423   22:40:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@166 -- # rpc_cmd nvmf_get_transports --trtype VFIOUSER
00:13:42.423   22:40:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:42.423   22:40:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:42.423  [
00:13:42.423  {
00:13:42.423  "trtype": "VFIOUSER",
00:13:42.423  "max_queue_depth": 256,
00:13:42.423  "max_io_qpairs_per_ctrlr": 127,
00:13:42.423  "in_capsule_data_size": 0,
00:13:42.423  "max_io_size": 131072,
00:13:42.423  "io_unit_size": 131072,
00:13:42.423  "max_aq_depth": 32,
00:13:42.423  "num_shared_buffers": 0,
00:13:42.423  "buf_cache_size": 0,
00:13:42.423  "dif_insert_or_strip": false,
00:13:42.423  "zcopy": false,
00:13:42.423  "abort_timeout_sec": 0,
00:13:42.423  "ack_timeout": 0,
00:13:42.423  "data_wr_pool_size": 0
00:13:42.423  }
00:13:42.423  ]
00:13:42.423   22:40:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:42.423   22:40:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@169 -- # vm_exec 0 '[[ ! -e /sys/class/nvme-subsystem/nvme-subsys0 ]]'
00:13:42.423   22:40:42 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:42.423   22:40:42 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:42.423   22:40:42 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:42.423   22:40:42 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:42.423   22:40:42 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:42.423    22:40:42 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:42.423    22:40:42 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:42.423    22:40:42 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:42.423    22:40:42 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:42.423    22:40:42 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:42.423    22:40:42 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:42.423   22:40:42 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 '[[ ! -e /sys/class/nvme-subsystem/nvme-subsys0 ]]'
00:13:42.423  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:42.423    22:40:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@172 -- # create_device 0 0
00:13:42.423    22:40:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@172 -- # jq -r .handle
00:13:42.423    22:40:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0
00:13:42.423    22:40:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:13:42.423    22:40:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:42.682  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:42.682  I0000 00:00:1733866843.384027  139468 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:42.683  I0000 00:00:1733866843.385803  139468 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:42.683  [2024-12-10 22:40:43.392035] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:13:42.941   22:40:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@172 -- # device0=nvme:nqn.2016-06.io.spdk:vfiouser-0
00:13:42.941   22:40:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@173 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:13:42.941   22:40:43 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:42.941   22:40:43 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:42.941  [
00:13:42.941  {
00:13:42.941  "nqn": "nqn.2016-06.io.spdk:vfiouser-0",
00:13:42.941  "subtype": "NVMe",
00:13:42.941  "listen_addresses": [
00:13:42.941  {
00:13:42.941  "trtype": "VFIOUSER",
00:13:42.941  "adrfam": "IPv4",
00:13:42.941  "traddr": "/var/tmp/vfiouser-0",
00:13:42.941  "trsvcid": ""
00:13:42.941  }
00:13:42.941  ],
00:13:42.941  "allow_any_host": true,
00:13:42.941  "hosts": [],
00:13:42.941  "serial_number": "00000000000000000000",
00:13:42.941  "model_number": "SPDK bdev Controller",
00:13:42.941  "max_namespaces": 32,
00:13:42.941  "min_cntlid": 1,
00:13:42.941  "max_cntlid": 65519,
00:13:42.941  "namespaces": []
00:13:42.941  }
00:13:42.941  ]
00:13:42.941   22:40:43 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:42.941   22:40:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@174 -- # vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-0
00:13:42.941   22:40:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@89 -- # sleep 1
00:13:42.941  [2024-12-10 22:40:43.700378] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-0: enabling controller
00:13:43.878    22:40:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:13:43.878    22:40:44 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:43.878    22:40:44 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:43.878    22:40:44 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:43.878    22:40:44 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:43.878    22:40:44 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:43.878     22:40:44 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:43.878     22:40:44 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:43.878     22:40:44 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:43.878     22:40:44 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:43.878     22:40:44 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:43.878     22:40:44 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:43.878    22:40:44 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:13:43.878  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:44.137   22:40:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # nqn=/sys/class/nvme/nvme0/subsysnqn
00:13:44.137   22:40:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@91 -- # [[ -z /sys/class/nvme/nvme0/subsysnqn ]]
00:13:44.137    22:40:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@177 -- # rpc_cmd nvmf_get_subsystems
00:13:44.137    22:40:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:44.137    22:40:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:44.137    22:40:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@177 -- # jq -r '. | length'
00:13:44.137    22:40:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:44.137   22:40:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@177 -- # [[ 2 -eq 2 ]]
00:13:44.137    22:40:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@179 -- # jq -r .handle
00:13:44.137    22:40:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@179 -- # create_device 1 0
00:13:44.137    22:40:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=1
00:13:44.137    22:40:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:13:44.137    22:40:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:44.396  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:44.396  I0000 00:00:1733866844.967680  139851 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:44.396  I0000 00:00:1733866844.969557  139851 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:44.396  [2024-12-10 22:40:44.972863] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-1' does not exist
00:13:44.396   22:40:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@179 -- # device1=nvme:nqn.2016-06.io.spdk:vfiouser-1
00:13:44.396   22:40:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@180 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:13:44.396   22:40:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:44.396   22:40:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:44.396  [
00:13:44.396  {
00:13:44.396  "nqn": "nqn.2016-06.io.spdk:vfiouser-0",
00:13:44.396  "subtype": "NVMe",
00:13:44.396  "listen_addresses": [
00:13:44.396  {
00:13:44.396  "trtype": "VFIOUSER",
00:13:44.396  "adrfam": "IPv4",
00:13:44.396  "traddr": "/var/tmp/vfiouser-0",
00:13:44.396  "trsvcid": ""
00:13:44.396  }
00:13:44.396  ],
00:13:44.396  "allow_any_host": true,
00:13:44.396  "hosts": [],
00:13:44.396  "serial_number": "00000000000000000000",
00:13:44.396  "model_number": "SPDK bdev Controller",
00:13:44.396  "max_namespaces": 32,
00:13:44.396  "min_cntlid": 1,
00:13:44.396  "max_cntlid": 65519,
00:13:44.396  "namespaces": []
00:13:44.396  }
00:13:44.396  ]
00:13:44.396   22:40:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:44.396   22:40:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@181 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:13:44.396   22:40:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:44.396   22:40:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:44.396  [
00:13:44.396  {
00:13:44.396  "nqn": "nqn.2016-06.io.spdk:vfiouser-1",
00:13:44.396  "subtype": "NVMe",
00:13:44.396  "listen_addresses": [
00:13:44.396  {
00:13:44.396  "trtype": "VFIOUSER",
00:13:44.396  "adrfam": "IPv4",
00:13:44.396  "traddr": "/var/tmp/vfiouser-1",
00:13:44.396  "trsvcid": ""
00:13:44.396  }
00:13:44.396  ],
00:13:44.396  "allow_any_host": true,
00:13:44.396  "hosts": [],
00:13:44.396  "serial_number": "00000000000000000000",
00:13:44.396  "model_number": "SPDK bdev Controller",
00:13:44.396  "max_namespaces": 32,
00:13:44.396  "min_cntlid": 1,
00:13:44.396  "max_cntlid": 65519,
00:13:44.396  "namespaces": []
00:13:44.396  }
00:13:44.396  ]
00:13:44.396   22:40:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:44.396   22:40:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@182 -- # [[ nvme:nqn.2016-06.io.spdk:vfiouser-0 != \n\v\m\e\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\v\f\i\o\u\s\e\r\-\1 ]]
00:13:44.397   22:40:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@183 -- # vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-1
00:13:44.397   22:40:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@89 -- # sleep 1
00:13:44.655  [2024-12-10 22:40:45.224017] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-1: enabling controller
00:13:45.593    22:40:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:13:45.593    22:40:46 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:45.593    22:40:46 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:45.593    22:40:46 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:45.593    22:40:46 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:45.593    22:40:46 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:45.593     22:40:46 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:45.593     22:40:46 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:45.593     22:40:46 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:45.593     22:40:46 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:45.593     22:40:46 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:45.593     22:40:46 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:45.593    22:40:46 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:13:45.593  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:45.593   22:40:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # nqn=/sys/class/nvme/nvme1/subsysnqn
00:13:45.593   22:40:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@91 -- # [[ -z /sys/class/nvme/nvme1/subsysnqn ]]
00:13:45.593    22:40:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@186 -- # rpc_cmd nvmf_get_subsystems
00:13:45.593    22:40:46 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:45.593    22:40:46 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:45.593    22:40:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@186 -- # jq -r '. | length'
00:13:45.593    22:40:46 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:45.593   22:40:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@186 -- # [[ 3 -eq 3 ]]
00:13:45.593    22:40:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@190 -- # jq -r .handle
00:13:45.593    22:40:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@190 -- # create_device 0 0
00:13:45.593    22:40:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0
00:13:45.593    22:40:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:13:45.593    22:40:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:45.851  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:45.851  I0000 00:00:1733866846.545561  140101 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:45.851  I0000 00:00:1733866846.547397  140101 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:45.852   22:40:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@190 -- # tmp0=nvme:nqn.2016-06.io.spdk:vfiouser-0
00:13:45.852    22:40:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@191 -- # create_device 1 0
00:13:45.852    22:40:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@191 -- # jq -r .handle
00:13:45.852    22:40:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=1
00:13:45.852    22:40:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:13:45.852    22:40:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:46.111  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:46.111  I0000 00:00:1733866846.826538  140125 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:46.111  I0000 00:00:1733866846.828267  140125 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:46.111   22:40:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@191 -- # tmp1=nvme:nqn.2016-06.io.spdk:vfiouser-1
00:13:46.111    22:40:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@193 -- # vm_count_nvme 0
00:13:46.111    22:40:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # vm_exec 0 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:13:46.111    22:40:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # wc -l
00:13:46.111    22:40:46 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:46.111    22:40:46 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:46.111    22:40:46 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:46.111    22:40:46 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:46.111    22:40:46 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:46.111     22:40:46 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:46.111     22:40:46 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:46.111     22:40:46 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:46.111     22:40:46 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:46.111     22:40:46 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:46.111     22:40:46 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:46.111    22:40:46 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:13:46.370  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:46.370   22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@193 -- # [[ 2 -eq 2 ]]
00:13:46.370    22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@195 -- # rpc_cmd nvmf_get_subsystems
00:13:46.370    22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.370    22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@195 -- # jq -r '. | length'
00:13:46.370    22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:46.370    22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.370   22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@195 -- # [[ 3 -eq 3 ]]
00:13:46.370   22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@196 -- # [[ nvme:nqn.2016-06.io.spdk:vfiouser-0 == \n\v\m\e\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\v\f\i\o\u\s\e\r\-\0 ]]
00:13:46.370   22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@197 -- # [[ nvme:nqn.2016-06.io.spdk:vfiouser-1 == \n\v\m\e\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\v\f\i\o\u\s\e\r\-\1 ]]
00:13:46.370   22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@200 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0
00:13:46.370   22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:46.630  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:46.630  I0000 00:00:1733866847.260623  140349 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:46.630  I0000 00:00:1733866847.262497  140349 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:46.630  {}
00:13:46.630   22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@201 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:13:46.630   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:13:46.630   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:13:46.630   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:13:46.630   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:46.630    22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:13:46.630   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:46.630   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:13:46.630   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.630   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:46.630  [2024-12-10 22:40:47.309389] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:13:46.630  request:
00:13:46.630  {
00:13:46.630    "nqn": "nqn.2016-06.io.spdk:vfiouser-0",
00:13:46.630    "method": "nvmf_get_subsystems",
00:13:46.630    "req_id": 1
00:13:46.630  }
00:13:46.630  Got JSON-RPC error response
00:13:46.630  response:
00:13:46.630  {
00:13:46.630    "code": -19,
00:13:46.630    "message": "No such device"
00:13:46.630  }
00:13:46.630   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:13:46.630   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:13:46.630   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:46.630   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:46.630   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:46.630   22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@202 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:13:46.630   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.630   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:46.630  [
00:13:46.630  {
00:13:46.630  "nqn": "nqn.2016-06.io.spdk:vfiouser-1",
00:13:46.630  "subtype": "NVMe",
00:13:46.630  "listen_addresses": [
00:13:46.630  {
00:13:46.630  "trtype": "VFIOUSER",
00:13:46.630  "adrfam": "IPv4",
00:13:46.630  "traddr": "/var/tmp/vfiouser-1",
00:13:46.630  "trsvcid": ""
00:13:46.630  }
00:13:46.630  ],
00:13:46.630  "allow_any_host": true,
00:13:46.630  "hosts": [],
00:13:46.630  "serial_number": "00000000000000000000",
00:13:46.630  "model_number": "SPDK bdev Controller",
00:13:46.630  "max_namespaces": 32,
00:13:46.630  "min_cntlid": 1,
00:13:46.630  "max_cntlid": 65519,
00:13:46.630  "namespaces": []
00:13:46.630  }
00:13:46.630  ]
00:13:46.630   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.630    22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@203 -- # rpc_cmd nvmf_get_subsystems
00:13:46.630    22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.630    22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:46.630    22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@203 -- # jq -r '. | length'
00:13:46.630    22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.630   22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@203 -- # [[ 2 -eq 2 ]]
00:13:46.630    22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@204 -- # vm_count_nvme 0
00:13:46.630    22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # vm_exec 0 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:13:46.630    22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # wc -l
00:13:46.630    22:40:47 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:46.630    22:40:47 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:46.630    22:40:47 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:46.630    22:40:47 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:46.630    22:40:47 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:46.630     22:40:47 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:46.630     22:40:47 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:46.630     22:40:47 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:46.630     22:40:47 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:46.630     22:40:47 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:46.630     22:40:47 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:46.631    22:40:47 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:13:46.631  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:46.890   22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@204 -- # [[ 1 -eq 1 ]]
00:13:46.890   22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@206 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-1
00:13:46.890   22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:47.149  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:47.149  I0000 00:00:1733866847.719646  140390 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:47.149  I0000 00:00:1733866847.721201  140390 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:47.149  {}
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@207 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.149    22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:47.149  [2024-12-10 22:40:47.766844] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:13:47.149  request:
00:13:47.149  {
00:13:47.149    "nqn": "nqn.2016-06.io.spdk:vfiouser-0",
00:13:47.149    "method": "nvmf_get_subsystems",
00:13:47.149    "req_id": 1
00:13:47.149  }
00:13:47.149  Got JSON-RPC error response
00:13:47.149  response:
00:13:47.149  {
00:13:47.149    "code": -19,
00:13:47.149    "message": "No such device"
00:13:47.149  }
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@208 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.149    22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:47.149  [2024-12-10 22:40:47.778893] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-1' does not exist
00:13:47.149  request:
00:13:47.149  {
00:13:47.149    "nqn": "nqn.2016-06.io.spdk:vfiouser-1",
00:13:47.149    "method": "nvmf_get_subsystems",
00:13:47.149    "req_id": 1
00:13:47.149  }
00:13:47.149  Got JSON-RPC error response
00:13:47.149  response:
00:13:47.149  {
00:13:47.149    "code": -19,
00:13:47.149    "message": "No such device"
00:13:47.149  }
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:47.149    22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@209 -- # rpc_cmd nvmf_get_subsystems
00:13:47.149    22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@209 -- # jq -r '. | length'
00:13:47.149    22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.149    22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:47.149    22:40:47 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.149   22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@209 -- # [[ 1 -eq 1 ]]
00:13:47.149    22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@210 -- # vm_count_nvme 0
00:13:47.149    22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # vm_exec 0 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:13:47.149    22:40:47 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:47.149    22:40:47 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:47.149    22:40:47 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:47.150    22:40:47 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:47.150    22:40:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # wc -l
00:13:47.150    22:40:47 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:47.150     22:40:47 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:47.150     22:40:47 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:47.150     22:40:47 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:47.150     22:40:47 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:47.150     22:40:47 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:47.150     22:40:47 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:47.150    22:40:47 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:13:47.150  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:47.409   22:40:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@210 -- # [[ 0 -eq 0 ]]
00:13:47.409   22:40:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@213 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0
00:13:47.409   22:40:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:47.667  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:47.667  I0000 00:00:1733866848.269292  140621 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:47.667  I0000 00:00:1733866848.270951  140621 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:47.667  [2024-12-10 22:40:48.276387] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:13:47.667  {}
00:13:47.667   22:40:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@214 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-1
00:13:47.667   22:40:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:47.926  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:47.926  I0000 00:00:1733866848.502025  140650 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:47.926  I0000 00:00:1733866848.503574  140650 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:47.926  [2024-12-10 22:40:48.509050] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-1' does not exist
00:13:47.926  {}
00:13:47.926    22:40:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@217 -- # create_device 0 0
00:13:47.926    22:40:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@217 -- # jq -r .handle
00:13:47.926    22:40:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0
00:13:47.926    22:40:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:13:47.926    22:40:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:48.185  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:48.185  I0000 00:00:1733866848.723671  140675 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:48.185  I0000 00:00:1733866848.725182  140675 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:48.185  [2024-12-10 22:40:48.729656] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:13:48.185   22:40:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@217 -- # device0=nvme:nqn.2016-06.io.spdk:vfiouser-0
00:13:48.185    22:40:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@218 -- # create_device 1 0
00:13:48.185    22:40:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=1
00:13:48.185    22:40:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:13:48.185    22:40:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:48.185    22:40:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@218 -- # jq -r .handle
00:13:48.444  [2024-12-10 22:40:48.987546] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-0: enabling controller
00:13:48.444  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:48.444  I0000 00:00:1733866849.101858  140700 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:48.444  I0000 00:00:1733866849.103560  140700 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:48.444  [2024-12-10 22:40:49.107202] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-1' does not exist
00:13:48.702   22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@218 -- # device1=nvme:nqn.2016-06.io.spdk:vfiouser-1
00:13:48.702    22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@219 -- # rpc_cmd bdev_get_bdevs -b null0
00:13:48.702    22:40:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:48.702    22:40:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:48.702    22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@219 -- # jq -r '.[].uuid'
00:13:48.702    22:40:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:48.702   22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@219 -- # uuid0=989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:48.702    22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@220 -- # rpc_cmd bdev_get_bdevs -b null1
00:13:48.702    22:40:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:48.702    22:40:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:48.702    22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@220 -- # jq -r '.[].uuid'
00:13:48.702    22:40:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:48.702   22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@220 -- # uuid1=cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:48.702   22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@223 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:48.702   22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:48.702    22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:48.702    22:40:49 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:13:48.702  [2024-12-10 22:40:49.367195] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-1: enabling controller
00:13:48.961  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:48.961  I0000 00:00:1733866849.575243  140927 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:48.961  I0000 00:00:1733866849.577031  140927 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:48.961  {}
00:13:48.961    22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@224 -- # jq -r '.[0].namespaces | length'
00:13:48.961    22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@224 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:13:48.961    22:40:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:48.961    22:40:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:48.961    22:40:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:48.961   22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@224 -- # [[ 1 -eq 1 ]]
00:13:48.961    22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@225 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:13:48.961    22:40:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:48.961    22:40:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:48.961    22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@225 -- # jq -r '.[0].namespaces | length'
00:13:48.961    22:40:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:48.961   22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@225 -- # [[ 0 -eq 0 ]]
00:13:48.961    22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@226 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:13:48.961    22:40:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:48.961    22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@226 -- # jq -r '.[0].namespaces[0].uuid'
00:13:48.961    22:40:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:48.961    22:40:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:49.220   22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@226 -- # [[ 989c825d-cb6d-4a6a-8407-8b67d82d2f97 == \9\8\9\c\8\2\5\d\-\c\b\6\d\-\4\a\6\a\-\8\4\0\7\-\8\b\6\7\d\8\2\d\2\f\9\7 ]]
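The checks traced above pipe `rpc_cmd nvmf_get_subsystems <nqn>` through `jq` to confirm the namespace count and UUID after each attach. A minimal local illustration of those same `jq` expressions, run against a canned JSON payload shaped like the subsystem output (the payload here is a hand-written stand-in, not captured RPC output):

```shell
#!/usr/bin/env bash
# Canned stand-in for the JSON that `rpc_cmd nvmf_get_subsystems` returns for
# nqn.2016-06.io.spdk:vfiouser-0 after the attach_volume call above.
json='[{"nqn":"nqn.2016-06.io.spdk:vfiouser-0","namespaces":[{"uuid":"989c825d-cb6d-4a6a-8407-8b67d82d2f97"}]}]'

# Same jq expressions as sma/vfiouser_qemu.sh@224 and @226.
count=$(echo "$json" | jq -r '.[0].namespaces | length')
uuid=$(echo "$json" | jq -r '.[0].namespaces[0].uuid')

# Same pass criteria the test script applies.
[[ $count -eq 1 ]] || exit 1
[[ $uuid == 989c825d-cb6d-4a6a-8407-8b67d82d2f97 ]] || exit 1
echo "namespace count=$count uuid=$uuid"
```

The escaped-glob comparison seen in the trace (`[[ $uuid == \9\8\9\c... ]]`) is just how bash xtrace renders a literal right-hand side; a plain string compare, as above, is equivalent.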
00:13:49.220   22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@227 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:49.220   22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:13:49.220   22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:13:49.220   22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:49.220    22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:13:49.220    22:40:49 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:49.220    22:40:49 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:49.220    22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:13:49.220    22:40:49 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:49.220    22:40:49 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:49.220    22:40:49 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:49.220     22:40:49 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:49.220     22:40:49 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:49.220     22:40:49 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:49.220     22:40:49 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:49.220     22:40:49 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:49.220     22:40:49 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:49.220    22:40:49 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:13:49.220  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:49.220   22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:13:49.220   22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:13:49.220    22:40:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 989c825d-cb6d-4a6a-8407-8b67d82d2f97 /sys/class/nvme/nvme0/nvme*/uuid'
00:13:49.220    22:40:49 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:49.220    22:40:49 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:49.220    22:40:49 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:49.220    22:40:49 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:49.220    22:40:49 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:49.220     22:40:49 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:49.220     22:40:49 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:49.220     22:40:49 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:49.220     22:40:49 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:49.220     22:40:49 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:49.220     22:40:49 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:49.220    22:40:49 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 989c825d-cb6d-4a6a-8407-8b67d82d2f97 /sys/class/nvme/nvme0/nvme*/uuid'
00:13:49.220  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:49.479   22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme0/nvme0c0n1/uuid
00:13:49.479   22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme0/nvme0c0n1/uuid ]]
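Inside the guest, `vm_check_subsys_volume` performs a two-step sysfs lookup: resolve the NVMe controller that owns the subsystem NQN, then confirm a namespace under that controller carries the volume UUID. A self-contained sketch of that flow, using a temporary directory that stands in for the guest's `/sys/class/nvme` tree (the directory layout and the `$(NF-1)` field index are local to this sketch; the real script runs the greps over SSH and uses `awk -F/ '{print $5}'` because the absolute sysfs path puts the controller name in field 5):

```shell
#!/usr/bin/env bash
# Fake sysfs tree mimicking what the guest exposes after attach_volume.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/nvme0/nvme0c0n1"
echo "nqn.2016-06.io.spdk:vfiouser-0" > "$sysfs/nvme0/subsysnqn"
echo "989c825d-cb6d-4a6a-8407-8b67d82d2f97" > "$sysfs/nvme0/nvme0c0n1/uuid"

nqn="nqn.2016-06.io.spdk:vfiouser-0"
uuid="989c825d-cb6d-4a6a-8407-8b67d82d2f97"

# Step 1 (sma/vfiouser_qemu.sh@76-77): find the controller whose subsysnqn
# matches, and fail if none does.
nvme=$(grep -l "$nqn" "$sysfs"/*/subsysnqn | awk -F/ '{print $(NF-1)}')
[[ -n $nvme ]] || exit 1

# Step 2 (sma/vfiouser_qemu.sh@82-83): check a namespace under that
# controller exposes the expected volume UUID.
tmpuuid=$(grep -l "$uuid" "$sysfs/$nvme"/nvme*/uuid)
[[ -n $tmpuuid ]] || exit 1
echo "found $uuid under $nvme"
```

This mirrors why the trace records `nvme=nvme0` followed by `tmpuuid=/sys/class/nvme/nvme0/nvme0c0n1/uuid` for vfiouser-0, and `nvme=nvme1` with the `nvme1c1n1` path for vfiouser-1.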
00:13:49.479   22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@229 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:49.479   22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:49.479    22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:49.479    22:40:50 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:13:49.737  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:49.737  I0000 00:00:1733866850.329586  140976 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:49.737  I0000 00:00:1733866850.331557  140976 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:49.737  {}
00:13:49.737    22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@230 -- # jq -r '.[0].namespaces | length'
00:13:49.737    22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@230 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:13:49.737    22:40:50 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:49.737    22:40:50 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:49.737    22:40:50 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:49.737   22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@230 -- # [[ 1 -eq 1 ]]
00:13:49.737    22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@231 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:13:49.737    22:40:50 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:49.737    22:40:50 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:49.737    22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@231 -- # jq -r '.[0].namespaces | length'
00:13:49.737    22:40:50 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:49.737   22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@231 -- # [[ 1 -eq 1 ]]
00:13:49.737    22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@232 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:13:49.737    22:40:50 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:49.737    22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@232 -- # jq -r '.[0].namespaces[0].uuid'
00:13:49.737    22:40:50 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:49.737    22:40:50 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:49.997   22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@232 -- # [[ 989c825d-cb6d-4a6a-8407-8b67d82d2f97 == \9\8\9\c\8\2\5\d\-\c\b\6\d\-\4\a\6\a\-\8\4\0\7\-\8\b\6\7\d\8\2\d\2\f\9\7 ]]
00:13:49.997    22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@233 -- # jq -r '.[0].namespaces[0].uuid'
00:13:49.997    22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@233 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:13:49.997    22:40:50 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:49.997    22:40:50 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:49.997    22:40:50 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:49.997   22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@233 -- # [[ cad3ca8c-1117-4443-8497-c5fd126a2cbc == \c\a\d\3\c\a\8\c\-\1\1\1\7\-\4\4\4\3\-\8\4\9\7\-\c\5\f\d\1\2\6\a\2\c\b\c ]]
00:13:49.997   22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@234 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:49.997   22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:13:49.997   22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:13:49.997   22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:49.997    22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:13:49.997    22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:13:49.997    22:40:50 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:49.997    22:40:50 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:49.997    22:40:50 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:49.997    22:40:50 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:49.997    22:40:50 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:49.997     22:40:50 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:49.997     22:40:50 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:49.997     22:40:50 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:49.997     22:40:50 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:49.997     22:40:50 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:49.997     22:40:50 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:49.997    22:40:50 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:13:49.997  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:49.997   22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:13:49.997   22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:13:49.997    22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l cad3ca8c-1117-4443-8497-c5fd126a2cbc /sys/class/nvme/nvme1/nvme*/uuid'
00:13:49.997    22:40:50 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:49.997    22:40:50 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:49.997    22:40:50 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:49.997    22:40:50 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:49.997    22:40:50 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:49.997     22:40:50 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:49.997     22:40:50 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:49.997     22:40:50 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:49.997     22:40:50 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:49.997     22:40:50 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:49.997     22:40:50 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:49.997    22:40:50 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l cad3ca8c-1117-4443-8497-c5fd126a2cbc /sys/class/nvme/nvme1/nvme*/uuid'
00:13:49.997  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:50.256   22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme1/nvme1c1n1/uuid
00:13:50.256   22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme1/nvme1c1n1/uuid ]]
00:13:50.256   22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@237 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:50.256   22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:50.256    22:40:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:50.256    22:40:50 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:13:50.513  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:50.513  I0000 00:00:1733866851.205824  141225 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:50.513  I0000 00:00:1733866851.207653  141225 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:50.513  {}
00:13:50.513   22:40:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@238 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:50.513   22:40:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:50.513    22:40:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:50.513    22:40:51 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:13:50.770  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:50.770  I0000 00:00:1733866851.526322  141256 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:50.770  I0000 00:00:1733866851.528432  141256 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:51.029  {}
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@239 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@239 -- # jq -r '.[0].namespaces | length'
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.029   22:40:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@239 -- # [[ 1 -eq 1 ]]
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@240 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@240 -- # jq -r '.[0].namespaces | length'
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.029   22:40:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@240 -- # [[ 1 -eq 1 ]]
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@241 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@241 -- # jq -r '.[0].namespaces[0].uuid'
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.029   22:40:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@241 -- # [[ 989c825d-cb6d-4a6a-8407-8b67d82d2f97 == \9\8\9\c\8\2\5\d\-\c\b\6\d\-\4\a\6\a\-\8\4\0\7\-\8\b\6\7\d\8\2\d\2\f\9\7 ]]
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@242 -- # jq -r '.[0].namespaces[0].uuid'
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@242 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.029   22:40:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@242 -- # [[ cad3ca8c-1117-4443-8497-c5fd126a2cbc == \c\a\d\3\c\a\8\c\-\1\1\1\7\-\4\4\4\3\-\8\4\9\7\-\c\5\f\d\1\2\6\a\2\c\b\c ]]
00:13:51.029   22:40:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@243 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:51.029   22:40:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:13:51.029   22:40:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:13:51.029   22:40:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:51.029    22:40:51 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:51.029     22:40:51 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:51.029     22:40:51 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:51.030     22:40:51 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:51.030     22:40:51 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:51.030     22:40:51 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:51.030     22:40:51 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:51.030    22:40:51 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:13:51.289  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:51.289   22:40:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:13:51.289   22:40:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:13:51.289    22:40:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 989c825d-cb6d-4a6a-8407-8b67d82d2f97 /sys/class/nvme/nvme0/nvme*/uuid'
00:13:51.289    22:40:51 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:51.289    22:40:51 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:51.289    22:40:51 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:51.289    22:40:51 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:51.289    22:40:51 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:51.289     22:40:51 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:51.289     22:40:51 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:51.289     22:40:51 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:51.289     22:40:51 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:51.289     22:40:51 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:51.289     22:40:51 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:51.289    22:40:51 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 989c825d-cb6d-4a6a-8407-8b67d82d2f97 /sys/class/nvme/nvme0/nvme*/uuid'
00:13:51.289  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:51.548   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme0/nvme0c0n1/uuid
00:13:51.548   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme0/nvme0c0n1/uuid ]]
00:13:51.549   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@244 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:51.549   22:40:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:13:51.549   22:40:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:51.549   22:40:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:13:51.549   22:40:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:51.549    22:40:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:13:51.549   22:40:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:51.549   22:40:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:51.549   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:13:51.549   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:13:51.549   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:51.549    22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:13:51.549    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:51.549    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:51.549    22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:13:51.549    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:51.549    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:51.549    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:51.549     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:51.549     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:51.549     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:51.549     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:51.549     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:51.549     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:51.549    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:13:51.549  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:51.549   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:13:51.549   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:13:51.549    22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l cad3ca8c-1117-4443-8497-c5fd126a2cbc /sys/class/nvme/nvme0/nvme*/uuid'
00:13:51.549    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:51.549    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:51.549    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:51.549    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:51.549    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:51.549     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:51.549     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:51.549     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:51.549     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:51.549     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:51.549     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:51.549    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l cad3ca8c-1117-4443-8497-c5fd126a2cbc /sys/class/nvme/nvme0/nvme*/uuid'
00:13:51.549  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:51.808   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:13:51.808   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:13:51.808   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:13:51.808   22:40:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:13:51.808   22:40:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:51.808   22:40:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:51.808   22:40:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
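The trace above (`es=1`, `(( es > 128 ))`, `(( !es == 0 ))`) is the autotest `NOT` wrapper inverting the exit status of `vm_check_subsys_volume`, so an expected lookup failure counts as a pass. A minimal re-implementation of that pattern — an illustration, not the exact `autotest_common.sh` body — looks like:

```shell
# Sketch of the NOT helper pattern traced above: run the wrapped command,
# capture its exit status, and succeed only when the command failed.
NOT() {
  local es=0
  "$@" || es=$?
  # (( !es == 0 )) in the real script: true (return 0) iff es is nonzero
  (( es != 0 ))
}

NOT false && echo "inverted ok"
```

The `|| es=$?` capture is what keeps a `set -e` shell from aborting on the wrapped command's failure before the status can be inverted.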
00:13:51.808   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@245 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:51.808   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:13:51.808   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:13:51.808   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:51.808    22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:13:51.808    22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:13:51.808    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:51.808    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:51.808    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:51.808    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:51.808    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:51.808     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:51.808     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:51.808     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:51.808     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:51.808     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:51.808     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:51.808    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:13:51.808  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:51.808   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:13:51.808   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:13:51.808    22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l cad3ca8c-1117-4443-8497-c5fd126a2cbc /sys/class/nvme/nvme1/nvme*/uuid'
00:13:51.808    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:51.809    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:51.809    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:51.809    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:51.809    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:51.809     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:51.809     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:51.809     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:51.809     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:51.809     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:51.809     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:51.809    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l cad3ca8c-1117-4443-8497-c5fd126a2cbc /sys/class/nvme/nvme1/nvme*/uuid'
00:13:51.809  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:52.068   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme1/nvme1c1n1/uuid
00:13:52.068   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme1/nvme1c1n1/uuid ]]
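The `vm_check_subsys_volume` sequence traced above is two greps inside the guest: first `grep -l` over `/sys/class/nvme/*/subsysnqn` to find which controller serves the NQN (with `awk -F/ '{print $5}'` pulling the controller name out of the matched path), then a second `grep -l` under that controller for the volume UUID. The same pattern can be exercised against a throwaway directory standing in for the guest's sysfs tree (the directory layout below is an illustration; field index differs from the script's `$5` only because the tree is not rooted at `/`):

```shell
# Build a fake /sys/class/nvme tree with one controller and one namespace.
root=$(mktemp -d)
mkdir -p "$root/sys/class/nvme/nvme1/nvme1c1n1"
echo "nqn.2016-06.io.spdk:vfiouser-1" > "$root/sys/class/nvme/nvme1/subsysnqn"
echo "cad3ca8c-1117-4443-8497-c5fd126a2cbc" > "$root/sys/class/nvme/nvme1/nvme1c1n1/uuid"

# Step 1: which controller serves the NQN? grep -l prints the matching
# file's path; the second-to-last path component is the controller name.
nvme=$(grep -l "nqn.2016-06.io.spdk:vfiouser-1" "$root"/sys/class/nvme/*/subsysnqn \
       | awk -F/ '{print $(NF-1)}')
echo "controller: $nvme"

# Step 2: is the volume UUID attached under that controller? A non-empty
# result here is what makes the real check return success.
tmpuuid=$(grep -l "cad3ca8c-1117-4443-8497-c5fd126a2cbc" \
          "$root"/sys/class/nvme/"$nvme"/nvme*/uuid)
echo "uuid file: $tmpuuid"
```

In the real run the first check finds `nvme1` and the second returns `/sys/class/nvme/nvme1/nvme1c1n1/uuid`, matching the `tmpuuid=` lines in the log.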
00:13:52.068   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@246 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:52.068   22:40:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:13:52.068   22:40:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:52.068   22:40:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:13:52.068   22:40:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:52.068    22:40:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:13:52.068   22:40:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:52.068   22:40:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:52.068   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:13:52.068   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:13:52.068   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:52.068    22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:13:52.068    22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:13:52.068    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:52.068    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:52.068    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:52.068    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:52.068    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:52.068     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:52.068     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:52.068     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:52.068     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:52.068     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:52.068     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:52.068    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:13:52.068  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:52.068   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:13:52.068   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:13:52.068    22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 989c825d-cb6d-4a6a-8407-8b67d82d2f97 /sys/class/nvme/nvme1/nvme*/uuid'
00:13:52.068    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:52.068    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:52.068    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:52.068    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:52.068    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:52.068     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:52.068     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:52.068     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:52.068     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:52.068     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:52.068     22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:52.068    22:40:52 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 989c825d-cb6d-4a6a-8407-8b67d82d2f97 /sys/class/nvme/nvme1/nvme*/uuid'
00:13:52.327  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:52.327   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:13:52.327   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:13:52.327   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:13:52.327   22:40:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:13:52.327   22:40:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:52.327   22:40:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:52.327   22:40:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:52.327   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@249 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:52.327   22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:52.327    22:40:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:52.327    22:40:52 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:13:52.586  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:52.586  I0000 00:00:1733866853.229733  141746 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:52.586  I0000 00:00:1733866853.231580  141746 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:52.586  {}
00:13:52.586   22:40:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@250 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:52.586   22:40:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:52.586    22:40:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:52.586    22:40:53 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:13:52.845  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:52.845  I0000 00:00:1733866853.562689  141775 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:52.845  I0000 00:00:1733866853.564642  141775 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:52.845  {}
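Both `detach_volume` calls above pipe the volume UUID through `uuid2base64` (`sma/common.sh@20`) before handing it to `sma-client.py`, because the SMA protobuf carries the volume id as the UUID's raw 16 bytes, base64-encoded. A hedged sketch of what that helper likely does (this re-implementation is an assumption, not the verbatim `sma/common.sh` body):

```shell
# Encode a UUID string as base64 of its raw 16 big-endian bytes, the form
# the SMA gRPC volume_id field expects.
uuid2base64() {
  python3 - "$1" <<'EOF'
import base64, sys, uuid
print(base64.b64encode(uuid.UUID(sys.argv[1]).bytes).decode())
EOF
}

uuid2base64 cad3ca8c-1117-4443-8497-c5fd126a2cbc
```

The `{}` lines in the log are the (empty) JSON responses from the SMA server once each detach succeeds.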
00:13:52.845    22:40:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@251 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:13:52.845    22:40:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.845    22:40:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@251 -- # jq -r '.[0].namespaces | length'
00:13:52.845    22:40:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:53.104    22:40:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.104   22:40:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@251 -- # [[ 1 -eq 1 ]]
00:13:53.104    22:40:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@252 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:13:53.104    22:40:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:53.104    22:40:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:53.104    22:40:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@252 -- # jq -r '.[0].namespaces | length'
00:13:53.104    22:40:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.104   22:40:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@252 -- # [[ 1 -eq 1 ]]
00:13:53.104    22:40:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@253 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:13:53.104    22:40:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@253 -- # jq -r '.[0].namespaces[0].uuid'
00:13:53.104    22:40:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:53.104    22:40:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:53.104    22:40:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.104   22:40:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@253 -- # [[ 989c825d-cb6d-4a6a-8407-8b67d82d2f97 == \9\8\9\c\8\2\5\d\-\c\b\6\d\-\4\a\6\a\-\8\4\0\7\-\8\b\6\7\d\8\2\d\2\f\9\7 ]]
00:13:53.104    22:40:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@254 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:13:53.104    22:40:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@254 -- # jq -r '.[0].namespaces[0].uuid'
00:13:53.104    22:40:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:53.104    22:40:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:53.104    22:40:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.104   22:40:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@254 -- # [[ cad3ca8c-1117-4443-8497-c5fd126a2cbc == \c\a\d\3\c\a\8\c\-\1\1\1\7\-\4\4\4\3\-\8\4\9\7\-\c\5\f\d\1\2\6\a\2\c\b\c ]]
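The checks at `vfiouser_qemu.sh@251`-`@254` above run `jq` over the `nvmf_get_subsystems` RPC output to assert each subsystem still has exactly one namespace and that the volumes have swapped UUIDs. The same filters can be shown against a stand-in document (the JSON shape below is trimmed to the fields the script queries; the real RPC response carries more):

```shell
# Stand-in for rpc_cmd nvmf_get_subsystems <nqn> output after the swap.
json='[{"nqn":"nqn.2016-06.io.spdk:vfiouser-0",
        "namespaces":[{"nsid":1,"uuid":"989c825d-cb6d-4a6a-8407-8b67d82d2f97"}]}]'

# @251/@252: namespace count must still be 1.
jq -r '.[0].namespaces | length' <<< "$json"

# @253/@254: the namespace UUID now matches the other volume.
jq -r '.[0].namespaces[0].uuid' <<< "$json"
```

The escaped-glob comparisons in the log (`[[ 989c... == \9\8\9\c... ]]`) are just xtrace rendering of a literal `[[ x == "$y" ]]` string match.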
00:13:53.104   22:40:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@255 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:53.104   22:40:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:13:53.104   22:40:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:13:53.104   22:40:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:53.104    22:40:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:13:53.104    22:40:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:13:53.104    22:40:53 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:53.104    22:40:53 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:53.104    22:40:53 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:53.104    22:40:53 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:53.104    22:40:53 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:53.104     22:40:53 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:53.104     22:40:53 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:53.104     22:40:53 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:53.104     22:40:53 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:53.104     22:40:53 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:53.104     22:40:53 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:53.104    22:40:53 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:13:53.104  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:53.363   22:40:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:13:53.363   22:40:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:13:53.363    22:40:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 989c825d-cb6d-4a6a-8407-8b67d82d2f97 /sys/class/nvme/nvme0/nvme*/uuid'
00:13:53.363    22:40:53 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:53.363    22:40:53 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:53.363    22:40:53 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:53.363    22:40:53 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:53.363    22:40:53 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:53.363     22:40:53 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:53.363     22:40:53 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:53.363     22:40:53 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:53.363     22:40:53 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:53.363     22:40:53 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:53.363     22:40:53 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:53.363    22:40:53 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 989c825d-cb6d-4a6a-8407-8b67d82d2f97 /sys/class/nvme/nvme0/nvme*/uuid'
00:13:53.363  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:53.363   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme0/nvme0c0n1/uuid
00:13:53.363   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme0/nvme0c0n1/uuid ]]
00:13:53.363   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@256 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:53.363   22:40:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:13:53.363   22:40:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:53.363   22:40:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:13:53.363   22:40:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:53.363    22:40:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:13:53.363   22:40:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:53.363   22:40:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:53.363   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:13:53.363   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:13:53.363   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:53.363    22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:13:53.363    22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:13:53.363    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:53.363    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:53.363    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:53.363    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:53.363    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:53.363     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:53.363     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:53.363     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:53.363     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:53.363     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:53.363     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:53.363    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:13:53.363  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:53.623   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:13:53.623   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:13:53.623    22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l cad3ca8c-1117-4443-8497-c5fd126a2cbc /sys/class/nvme/nvme0/nvme*/uuid'
00:13:53.623    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:53.623    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:53.623    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:53.623    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:53.623    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:53.623     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:53.623     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:53.623     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:53.623     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:53.623     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:53.623     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:53.623    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l cad3ca8c-1117-4443-8497-c5fd126a2cbc /sys/class/nvme/nvme0/nvme*/uuid'
00:13:53.623  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:53.623   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:13:53.623   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:13:53.623   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:13:53.623   22:40:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:13:53.623   22:40:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:53.623   22:40:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:53.623   22:40:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:53.623   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@257 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:53.623   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:13:53.623   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:13:53.623   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:53.623    22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:13:53.623    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:53.623    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:53.623    22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:13:53.623    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:53.623    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:53.623    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:53.623     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:53.623     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:53.623     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:53.623     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:53.623     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:53.623     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:53.623    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:13:53.882  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:53.882   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:13:53.882   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:13:53.882    22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l cad3ca8c-1117-4443-8497-c5fd126a2cbc /sys/class/nvme/nvme1/nvme*/uuid'
00:13:53.882    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:53.882    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:53.882    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:53.882    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:53.882    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:53.882     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:53.882     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:53.882     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:53.883     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:53.883     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:53.883     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:53.883    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l cad3ca8c-1117-4443-8497-c5fd126a2cbc /sys/class/nvme/nvme1/nvme*/uuid'
00:13:53.883  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:54.142   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme1/nvme1c1n1/uuid
00:13:54.142   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme1/nvme1c1n1/uuid ]]
00:13:54.142   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@258 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:54.142   22:40:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:13:54.142   22:40:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:54.142   22:40:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:13:54.142   22:40:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:54.142    22:40:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:13:54.142   22:40:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:54.142   22:40:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:54.142   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:13:54.142   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:13:54.142   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:54.142    22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:13:54.142    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:54.142    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:54.142    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:54.142    22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:13:54.142    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:54.142    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:54.142     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:54.142     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:54.142     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:54.142     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:54.142     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:54.142     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:54.142    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:13:54.142  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:54.142   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:13:54.142   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:13:54.142    22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 989c825d-cb6d-4a6a-8407-8b67d82d2f97 /sys/class/nvme/nvme1/nvme*/uuid'
00:13:54.142    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:54.142    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:54.142    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:54.142    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:54.142    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:54.142     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:54.142     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:54.142     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:54.142     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:54.142     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:54.142     22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:54.142    22:40:54 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 989c825d-cb6d-4a6a-8407-8b67d82d2f97 /sys/class/nvme/nvme1/nvme*/uuid'
00:13:54.142  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:54.401   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:13:54.401   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:13:54.401   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:13:54.401   22:40:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:13:54.401   22:40:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:54.401   22:40:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:54.401   22:40:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:54.401   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@261 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:54.401   22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:54.401    22:40:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:54.401    22:40:54 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:13:54.659  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:54.659  I0000 00:00:1733866855.265555  142080 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:54.659  I0000 00:00:1733866855.270578  142080 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:54.659  {}
00:13:54.659   22:40:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@262 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:54.659   22:40:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:54.659    22:40:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:54.659    22:40:55 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:13:54.918  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:54.918  I0000 00:00:1733866855.564128  142292 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:54.918  I0000 00:00:1733866855.565863  142292 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:54.918  {}
00:13:54.918    22:40:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@263 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:13:54.918    22:40:55 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:54.918    22:40:55 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:54.918    22:40:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@263 -- # jq -r '.[0].namespaces | length'
00:13:54.918    22:40:55 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:54.918   22:40:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@263 -- # [[ 0 -eq 0 ]]
00:13:54.918    22:40:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@264 -- # jq -r '.[0].namespaces | length'
00:13:54.918    22:40:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@264 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:13:54.918    22:40:55 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:54.918    22:40:55 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:54.918    22:40:55 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:55.178   22:40:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@264 -- # [[ 0 -eq 0 ]]
00:13:55.178   22:40:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@265 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:55.178   22:40:55 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:13:55.178   22:40:55 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:55.178   22:40:55 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:13:55.178   22:40:55 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:55.178    22:40:55 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:13:55.178   22:40:55 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:55.178   22:40:55 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:55.178   22:40:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:13:55.178   22:40:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:13:55.178   22:40:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:55.178    22:40:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:13:55.178    22:40:55 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:55.178    22:40:55 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:55.178    22:40:55 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:55.178    22:40:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:13:55.178    22:40:55 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:55.178    22:40:55 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:55.178     22:40:55 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:55.178     22:40:55 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:55.178     22:40:55 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:55.178     22:40:55 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:55.178     22:40:55 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:55.178     22:40:55 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:55.178    22:40:55 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:13:55.178  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:55.178   22:40:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:13:55.178   22:40:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:13:55.178    22:40:55 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 989c825d-cb6d-4a6a-8407-8b67d82d2f97 /sys/class/nvme/nvme0/nvme*/uuid'
00:13:55.178    22:40:55 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:55.178    22:40:55 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:55.178    22:40:55 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:55.178    22:40:55 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:55.178    22:40:55 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:55.178     22:40:55 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:55.178     22:40:55 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:55.178     22:40:55 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:55.178     22:40:55 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:55.178     22:40:55 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:55.178     22:40:55 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:55.178    22:40:55 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 989c825d-cb6d-4a6a-8407-8b67d82d2f97 /sys/class/nvme/nvme0/nvme*/uuid'
00:13:55.178  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:55.437  grep: /sys/class/nvme/nvme0/nvme*/uuid: No such file or directory
00:13:55.437   22:40:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:13:55.437   22:40:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:13:55.437   22:40:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:13:55.437   22:40:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:13:55.437   22:40:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:55.437   22:40:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:55.437   22:40:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:55.437   22:40:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@266 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:55.437   22:40:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:13:55.437   22:40:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:55.437   22:40:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:13:55.437   22:40:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:55.437    22:40:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:13:55.437   22:40:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:55.437   22:40:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:55.437   22:40:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:13:55.437   22:40:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:13:55.437   22:40:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:55.437    22:40:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:13:55.437    22:40:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:13:55.437    22:40:56 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:55.437    22:40:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:55.437    22:40:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:55.437    22:40:56 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:55.437    22:40:56 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:55.437     22:40:56 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:55.437     22:40:56 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:55.437     22:40:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:55.437     22:40:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:55.437     22:40:56 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:55.437     22:40:56 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:55.437    22:40:56 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:13:55.437  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:55.437   22:40:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:13:55.437   22:40:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:13:55.437    22:40:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l cad3ca8c-1117-4443-8497-c5fd126a2cbc /sys/class/nvme/nvme1/nvme*/uuid'
00:13:55.437    22:40:56 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:55.437    22:40:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:55.437    22:40:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:55.437    22:40:56 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:55.437    22:40:56 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:55.437     22:40:56 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:55.437     22:40:56 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:55.437     22:40:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:55.437     22:40:56 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:55.437     22:40:56 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:55.437     22:40:56 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:55.437    22:40:56 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l cad3ca8c-1117-4443-8497-c5fd126a2cbc /sys/class/nvme/nvme1/nvme*/uuid'
00:13:55.696  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:55.696  grep: /sys/class/nvme/nvme1/nvme*/uuid: No such file or directory
00:13:55.696   22:40:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:13:55.696   22:40:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:13:55.696   22:40:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:13:55.696   22:40:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:13:55.696   22:40:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:55.696   22:40:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:55.696   22:40:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:55.696   22:40:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@269 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:55.696   22:40:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:55.696    22:40:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:55.696    22:40:56 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:13:55.955  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:55.955  I0000 00:00:1733866856.584616  142549 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:55.955  I0000 00:00:1733866856.586453  142549 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:55.955  {}
00:13:55.955   22:40:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@270 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:55.955   22:40:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:55.955    22:40:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:55.955    22:40:56 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:13:56.214  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:56.214  I0000 00:00:1733866856.919103  142576 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:56.214  I0000 00:00:1733866856.920900  142576 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:56.214  {}
00:13:56.214   22:40:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@271 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:56.214   22:40:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:56.214    22:40:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 cad3ca8c-1117-4443-8497-c5fd126a2cbc
00:13:56.214    22:40:56 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:13:56.473  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:56.473  I0000 00:00:1733866857.213678  142604 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:56.473  I0000 00:00:1733866857.215652  142604 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:56.473  {}
00:13:56.732   22:40:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@272 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:56.732   22:40:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:56.732    22:40:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:13:56.732    22:40:57 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:13:56.732  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:56.732  I0000 00:00:1733866857.513194  142628 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:56.732  I0000 00:00:1733866857.514998  142628 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:56.991  {}
00:13:56.991   22:40:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@274 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0
00:13:56.991   22:40:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:56.991  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:56.991  I0000 00:00:1733866857.763805  142849 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:56.991  I0000 00:00:1733866857.765626  142849 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:57.249  {}
00:13:57.249   22:40:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@275 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-1
00:13:57.249   22:40:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:57.249  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:57.249  I0000 00:00:1733866858.023639  142870 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:57.249  I0000 00:00:1733866858.025446  142870 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:57.507  {}
00:13:57.507    22:40:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@278 -- # create_device 42 0
00:13:57.507    22:40:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=42
00:13:57.507    22:40:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@278 -- # jq -r .handle
00:13:57.507    22:40:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:13:57.507    22:40:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:57.507  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:57.507  I0000 00:00:1733866858.271816  142901 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:57.507  I0000 00:00:1733866858.273531  142901 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:57.507  [2024-12-10 22:40:58.279292] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-42' does not exist
00:13:57.766   22:40:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@278 -- # device3=nvme:nqn.2016-06.io.spdk:vfiouser-42
00:13:57.766   22:40:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@279 -- # vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-42
00:13:57.766   22:40:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@89 -- # sleep 1
00:13:58.025  [2024-12-10 22:40:58.554265] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-42: enabling controller
00:13:58.961    22:40:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-42 /sys/class/nvme/*/subsysnqn'
00:13:58.961    22:40:59 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:13:58.961    22:40:59 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:58.961    22:40:59 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:58.961    22:40:59 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:13:58.961    22:40:59 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:13:58.961     22:40:59 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:13:58.961     22:40:59 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:13:58.961     22:40:59 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:13:58.961     22:40:59 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:13:58.961     22:40:59 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:13:58.961     22:40:59 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:13:58.961    22:40:59 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-42 /sys/class/nvme/*/subsysnqn'
00:13:58.961  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:13:58.961   22:40:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # nqn=/sys/class/nvme/nvme0/subsysnqn
00:13:58.961   22:40:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@91 -- # [[ -z /sys/class/nvme/nvme0/subsysnqn ]]
00:13:58.961   22:40:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@282 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-42
00:13:58.961   22:40:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:13:59.220  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:13:59.220  I0000 00:00:1733866859.805122  143137 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:13:59.220  I0000 00:00:1733866859.806974  143137 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:13:59.220  {}
00:13:59.220   22:40:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@283 -- # NOT vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-42
00:13:59.220   22:40:59 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:13:59.220   22:40:59 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-42
00:13:59.220   22:40:59 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_nqn
00:13:59.220   22:40:59 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:59.220    22:40:59 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_nqn
00:13:59.220   22:40:59 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:59.220   22:40:59 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-42
00:13:59.220   22:40:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@89 -- # sleep 1
00:14:00.157    22:41:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-42 /sys/class/nvme/*/subsysnqn'
00:14:00.157    22:41:00 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:14:00.157    22:41:00 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:00.157    22:41:00 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:00.157    22:41:00 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:14:00.157    22:41:00 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:14:00.157     22:41:00 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:14:00.157     22:41:00 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:14:00.157     22:41:00 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:00.157     22:41:00 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:00.157     22:41:00 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:14:00.157     22:41:00 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:14:00.157    22:41:00 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-42 /sys/class/nvme/*/subsysnqn'
00:14:00.157  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:14:00.416  grep: /sys/class/nvme/*/subsysnqn: No such file or directory
00:14:00.416   22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # nqn=
00:14:00.416   22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@91 -- # [[ -z '' ]]
00:14:00.416   22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@92 -- # error 'FAILED no NVMe on vm=0 with nqn=nqn.2016-06.io.spdk:vfiouser-42'
00:14:00.416   22:41:01 sma.sma_vfiouser_qemu -- vhost/common.sh@82 -- # echo ===========
00:14:00.416  ===========
00:14:00.416   22:41:01 sma.sma_vfiouser_qemu -- vhost/common.sh@83 -- # message ERROR 'FAILED no NVMe on vm=0 with nqn=nqn.2016-06.io.spdk:vfiouser-42'
00:14:00.416   22:41:01 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:14:00.416   22:41:01 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:14:00.416   22:41:01 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:14:00.416   22:41:01 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=ERROR
00:14:00.416   22:41:01 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:14:00.416   22:41:01 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'ERROR: FAILED no NVMe on vm=0 with nqn=nqn.2016-06.io.spdk:vfiouser-42'
00:14:00.416  ERROR: FAILED no NVMe on vm=0 with nqn=nqn.2016-06.io.spdk:vfiouser-42
00:14:00.416   22:41:01 sma.sma_vfiouser_qemu -- vhost/common.sh@84 -- # echo ===========
00:14:00.416  ===========
00:14:00.416   22:41:01 sma.sma_vfiouser_qemu -- vhost/common.sh@86 -- # false
00:14:00.416   22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@93 -- # return 1
00:14:00.416   22:41:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:14:00.416   22:41:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:00.416   22:41:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:00.416   22:41:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:00.416   22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@285 -- # key0=1234567890abcdef1234567890abcdef
00:14:00.416    22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@286 -- # create_device 0 0
00:14:00.416    22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@286 -- # jq -r .handle
00:14:00.416    22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0
00:14:00.416    22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:14:00.416    22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:00.675  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:00.675  I0000 00:00:1733866861.239091  143459 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:00.675  I0000 00:00:1733866861.240918  143459 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:00.675  [2024-12-10 22:41:01.244960] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:14:00.675   22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@286 -- # device0=nvme:nqn.2016-06.io.spdk:vfiouser-0
00:14:00.675    22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@287 -- # rpc_cmd bdev_get_bdevs -b null0
00:14:00.675    22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@287 -- # jq -r '.[].uuid'
00:14:00.675    22:41:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:00.675    22:41:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:00.675    22:41:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:00.675   22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@287 -- # uuid0=989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:14:00.675   22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@290 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:00.675    22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@290 -- # uuid2base64 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:14:00.675    22:41:01 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:14:00.934    22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@290 -- # get_cipher AES_CBC
00:14:00.934    22:41:01 sma.sma_vfiouser_qemu -- sma/common.sh@27 -- # case "$1" in
00:14:00.934    22:41:01 sma.sma_vfiouser_qemu -- sma/common.sh@28 -- # echo 0
00:14:00.934    22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@290 -- # format_key 1234567890abcdef1234567890abcdef
00:14:00.934    22:41:01 sma.sma_vfiouser_qemu -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:14:00.934     22:41:01 sma.sma_vfiouser_qemu -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:14:00.934  [2024-12-10 22:41:01.502761] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-0: enabling controller
00:14:01.193  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:01.193  I0000 00:00:1733866861.730857  143620 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:01.193  I0000 00:00:1733866861.732780  143620 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:01.193  {}
00:14:01.193    22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@307 -- # jq -r '.[0].namespaces[0].name'
00:14:01.193    22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@307 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:14:01.193    22:41:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:01.193    22:41:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:01.193    22:41:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:01.193   22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@307 -- # ns_bdev=ea372130-13fc-49e4-ae01-2178ed0092b7
00:14:01.193    22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@308 -- # rpc_cmd bdev_get_bdevs -b ea372130-13fc-49e4-ae01-2178ed0092b7
00:14:01.193    22:41:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:01.193    22:41:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:01.193    22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@308 -- # jq -r '.[0].product_name'
00:14:01.193    22:41:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:01.193   22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@308 -- # [[ crypto == \c\r\y\p\t\o ]]
00:14:01.193    22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@309 -- # rpc_cmd bdev_get_bdevs -b ea372130-13fc-49e4-ae01-2178ed0092b7
00:14:01.193    22:41:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:01.193    22:41:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:01.193    22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@309 -- # jq -r '.[] | select(.product_name == "crypto")'
00:14:01.193    22:41:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:01.193   22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@309 -- # crypto_bdev='{
00:14:01.193    "name": "ea372130-13fc-49e4-ae01-2178ed0092b7",
00:14:01.193    "aliases": [
00:14:01.193      "c3f438f1-5989-5e7d-8914-9807a72b0ae8"
00:14:01.193    ],
00:14:01.193    "product_name": "crypto",
00:14:01.193    "block_size": 4096,
00:14:01.193    "num_blocks": 25600,
00:14:01.193    "uuid": "c3f438f1-5989-5e7d-8914-9807a72b0ae8",
00:14:01.193    "assigned_rate_limits": {
00:14:01.193      "rw_ios_per_sec": 0,
00:14:01.193      "rw_mbytes_per_sec": 0,
00:14:01.193      "r_mbytes_per_sec": 0,
00:14:01.193      "w_mbytes_per_sec": 0
00:14:01.193    },
00:14:01.193    "claimed": true,
00:14:01.193    "claim_type": "exclusive_write",
00:14:01.193    "zoned": false,
00:14:01.193    "supported_io_types": {
00:14:01.193      "read": true,
00:14:01.193      "write": true,
00:14:01.193      "unmap": false,
00:14:01.193      "flush": false,
00:14:01.193      "reset": true,
00:14:01.193      "nvme_admin": false,
00:14:01.193      "nvme_io": false,
00:14:01.193      "nvme_io_md": false,
00:14:01.193      "write_zeroes": true,
00:14:01.193      "zcopy": false,
00:14:01.194      "get_zone_info": false,
00:14:01.194      "zone_management": false,
00:14:01.194      "zone_append": false,
00:14:01.194      "compare": false,
00:14:01.194      "compare_and_write": false,
00:14:01.194      "abort": false,
00:14:01.194      "seek_hole": false,
00:14:01.194      "seek_data": false,
00:14:01.194      "copy": false,
00:14:01.194      "nvme_iov_md": false
00:14:01.194    },
00:14:01.194    "memory_domains": [
00:14:01.194      {
00:14:01.194        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:01.194        "dma_device_type": 2
00:14:01.194      }
00:14:01.194    ],
00:14:01.194    "driver_specific": {
00:14:01.194      "crypto": {
00:14:01.194        "base_bdev_name": "null0",
00:14:01.194        "name": "ea372130-13fc-49e4-ae01-2178ed0092b7",
00:14:01.194        "key_name": "ea372130-13fc-49e4-ae01-2178ed0092b7_AES_CBC"
00:14:01.194      }
00:14:01.194    }
00:14:01.194  }'
00:14:01.194    22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@310 -- # rpc_cmd bdev_get_bdevs
00:14:01.194    22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@310 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:14:01.194    22:41:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:01.194    22:41:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:01.194    22:41:01 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:01.194   22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@310 -- # [[ 1 -eq 1 ]]
00:14:01.194    22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@312 -- # jq -r .driver_specific.crypto.key_name
00:14:01.453   22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@312 -- # key_name=ea372130-13fc-49e4-ae01-2178ed0092b7_AES_CBC
00:14:01.453    22:41:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@313 -- # rpc_cmd accel_crypto_keys_get -k ea372130-13fc-49e4-ae01-2178ed0092b7_AES_CBC
00:14:01.453    22:41:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:01.453    22:41:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:01.453    22:41:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:01.453   22:41:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@313 -- # key_obj='[
00:14:01.453  {
00:14:01.453  "name": "ea372130-13fc-49e4-ae01-2178ed0092b7_AES_CBC",
00:14:01.453  "cipher": "AES_CBC",
00:14:01.453  "key": "1234567890abcdef1234567890abcdef"
00:14:01.453  }
00:14:01.453  ]'
00:14:01.453    22:41:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@314 -- # jq -r '.[0].key'
00:14:01.453   22:41:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@314 -- # [[ 1234567890abcdef1234567890abcdef == \1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f\1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f ]]
00:14:01.453    22:41:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@315 -- # jq -r '.[0].cipher'
00:14:01.453   22:41:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@315 -- # [[ AES_CBC == \A\E\S\_\C\B\C ]]
00:14:01.453   22:41:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@317 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:14:01.453   22:41:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:01.453    22:41:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:14:01.453    22:41:02 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:14:01.712  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:01.713  I0000 00:00:1733866862.328253  143666 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:01.713  I0000 00:00:1733866862.329933  143666 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:01.713  {}
00:14:01.713   22:41:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@318 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0
00:14:01.713   22:41:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:01.972  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:01.972  I0000 00:00:1733866862.609976  143891 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:01.972  I0000 00:00:1733866862.611861  143891 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:01.972  {}
00:14:01.972    22:41:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@319 -- # rpc_cmd bdev_get_bdevs
00:14:01.972    22:41:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:01.972    22:41:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@319 -- # jq -r '.[] | select(.product_name == "crypto")'
00:14:01.972    22:41:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:01.972    22:41:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@319 -- # jq -r length
00:14:01.972    22:41:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:01.972   22:41:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@319 -- # [[ '' -eq 0 ]]
00:14:01.972   22:41:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@322 -- # device_vfio_user=1
00:14:01.972    22:41:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@323 -- # create_device 0 0
00:14:01.972    22:41:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0
00:14:01.972    22:41:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@323 -- # jq -r .handle
00:14:01.972    22:41:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:14:01.972    22:41:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:02.231  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:02.231  I0000 00:00:1733866862.910644  143919 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:02.231  I0000 00:00:1733866862.912589  143919 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:02.231  [2024-12-10 22:41:02.918916] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:14:02.490   22:41:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@323 -- # device0=nvme:nqn.2016-06.io.spdk:vfiouser-0
00:14:02.490   22:41:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@324 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:14:02.490   22:41:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:02.490    22:41:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:14:02.490    22:41:03 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:14:02.490  [2024-12-10 22:41:03.175800] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-0: enabling controller
00:14:02.749  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:02.749  I0000 00:00:1733866863.333340  143942 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:02.749  I0000 00:00:1733866863.335075  143942 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:02.749  {}
00:14:02.749   22:41:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@327 -- # diff /dev/fd/62 /dev/fd/61
00:14:02.749    22:41:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@327 -- # jq --sort-keys
00:14:02.749    22:41:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@327 -- # get_qos_caps 1
00:14:02.749    22:41:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@327 -- # jq --sort-keys
00:14:02.749    22:41:03 sma.sma_vfiouser_qemu -- sma/common.sh@45 -- # local rootdir
00:14:02.749     22:41:03 sma.sma_vfiouser_qemu -- sma/common.sh@47 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:14:02.749    22:41:03 sma.sma_vfiouser_qemu -- sma/common.sh@47 -- # rootdir=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../..
00:14:02.749    22:41:03 sma.sma_vfiouser_qemu -- sma/common.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py
00:14:03.008  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:03.008  I0000 00:00:1733866863.601975  144087 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:03.008  I0000 00:00:1733866863.603810  144087 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:03.008   22:41:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@340 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:03.008    22:41:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@340 -- # uuid2base64 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:14:03.008    22:41:03 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:14:03.267  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:03.267  I0000 00:00:1733866863.922301  144199 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:03.267  I0000 00:00:1733866863.924113  144199 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:03.267  {}
00:14:03.267   22:41:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@359 -- # diff /dev/fd/62 /dev/fd/61
00:14:03.267    22:41:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@359 -- # rpc_cmd bdev_get_bdevs -b null0
00:14:03.267    22:41:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:03.267    22:41:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:03.267    22:41:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@359 -- # jq --sort-keys
00:14:03.267    22:41:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@359 -- # jq --sort-keys '.[].assigned_rate_limits'
00:14:03.267    22:41:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:03.267   22:41:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@370 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:14:03.267   22:41:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:03.267    22:41:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 989c825d-cb6d-4a6a-8407-8b67d82d2f97
00:14:03.267    22:41:04 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:14:03.526  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:03.526  I0000 00:00:1733866864.292150  144229 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:03.526  I0000 00:00:1733866864.294005  144229 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:03.786  {}
00:14:03.786   22:41:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@371 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0
00:14:03.786   22:41:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:03.786  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:03.786  I0000 00:00:1733866864.564376  144255 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:03.786  I0000 00:00:1733866864.566379  144255 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:04.044  {}
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@373 -- # cleanup
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@98 -- # vm_kill_all
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@476 -- # local vm
00:14:04.044    22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@477 -- # vm_list_all
00:14:04.044    22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@466 -- # vms=()
00:14:04.044    22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@466 -- # local vms
00:14:04.044    22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:14:04.044    22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:14:04.044    22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/0
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@478 -- # vm_kill 0
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@442 -- # vm_num_is_valid 0
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/0
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@449 -- # local vm_pid
00:14:04.044    22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@450 -- # cat /root/vhost_test/vms/0/qemu.pid
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@450 -- # vm_pid=134983
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@452 -- # notice 'Killing virtual machine /root/vhost_test/vms/0 (pid=134983)'
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'Killing virtual machine /root/vhost_test/vms/0 (pid=134983)'
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: Killing virtual machine /root/vhost_test/vms/0 (pid=134983)'
00:14:04.044  INFO: Killing virtual machine /root/vhost_test/vms/0 (pid=134983)
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@454 -- # /bin/kill 134983
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@455 -- # notice 'process 134983 killed'
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'process 134983 killed'
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: process 134983 killed'
00:14:04.044  INFO: process 134983 killed
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@456 -- # rm -rf /root/vhost_test/vms/0
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@99 -- # killprocess 138947
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@954 -- # '[' -z 138947 ']'
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@958 -- # kill -0 138947
00:14:04.044    22:41:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@959 -- # uname
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:04.044    22:41:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 138947
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@972 -- # echo 'killing process with pid 138947'
00:14:04.044  killing process with pid 138947
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@973 -- # kill 138947
00:14:04.044   22:41:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@978 -- # wait 138947
00:14:06.580   22:41:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@100 -- # killprocess 139180
00:14:06.580   22:41:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@954 -- # '[' -z 139180 ']'
00:14:06.580   22:41:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@958 -- # kill -0 139180
00:14:06.580    22:41:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@959 -- # uname
00:14:06.580   22:41:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:06.580    22:41:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 139180
00:14:06.580   22:41:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@960 -- # process_name=python3
00:14:06.580   22:41:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:14:06.580   22:41:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@972 -- # echo 'killing process with pid 139180'
00:14:06.580  killing process with pid 139180
00:14:06.580   22:41:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@973 -- # kill 139180
00:14:06.580   22:41:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@978 -- # wait 139180
00:14:06.580   22:41:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@101 -- # '[' -e /tmp/sma/vfio-user/qemu ']'
00:14:06.580   22:41:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@101 -- # rm -rf /tmp/sma/vfio-user/qemu
00:14:06.580   22:41:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@374 -- # trap - SIGINT SIGTERM EXIT
00:14:06.580  
00:14:06.580  real	0m50.681s
00:14:06.580  user	0m37.815s
00:14:06.580  sys	0m3.559s
00:14:06.580   22:41:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:06.580   22:41:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:06.580  ************************************
00:14:06.580  END TEST sma_vfiouser_qemu
00:14:06.580  ************************************
00:14:06.580   22:41:07 sma -- sma/sma.sh@13 -- # run_test sma_plugins /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins.sh
00:14:06.580   22:41:07 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:06.580   22:41:07 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:06.580   22:41:07 sma -- common/autotest_common.sh@10 -- # set +x
00:14:06.580  ************************************
00:14:06.580  START TEST sma_plugins
00:14:06.580  ************************************
00:14:06.580   22:41:07 sma.sma_plugins -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins.sh
00:14:06.580  * Looking for test storage...
00:14:06.580  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:14:06.580    22:41:07 sma.sma_plugins -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:06.580     22:41:07 sma.sma_plugins -- common/autotest_common.sh@1711 -- # lcov --version
00:14:06.580     22:41:07 sma.sma_plugins -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:14:06.580    22:41:07 sma.sma_plugins -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:14:06.580    22:41:07 sma.sma_plugins -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:06.580    22:41:07 sma.sma_plugins -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:06.580    22:41:07 sma.sma_plugins -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:06.580    22:41:07 sma.sma_plugins -- scripts/common.sh@336 -- # IFS=.-:
00:14:06.580    22:41:07 sma.sma_plugins -- scripts/common.sh@336 -- # read -ra ver1
00:14:06.580    22:41:07 sma.sma_plugins -- scripts/common.sh@337 -- # IFS=.-:
00:14:06.580    22:41:07 sma.sma_plugins -- scripts/common.sh@337 -- # read -ra ver2
00:14:06.580    22:41:07 sma.sma_plugins -- scripts/common.sh@338 -- # local 'op=<'
00:14:06.580    22:41:07 sma.sma_plugins -- scripts/common.sh@340 -- # ver1_l=2
00:14:06.580    22:41:07 sma.sma_plugins -- scripts/common.sh@341 -- # ver2_l=1
00:14:06.580    22:41:07 sma.sma_plugins -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:06.580    22:41:07 sma.sma_plugins -- scripts/common.sh@344 -- # case "$op" in
00:14:06.580    22:41:07 sma.sma_plugins -- scripts/common.sh@345 -- # : 1
00:14:06.580    22:41:07 sma.sma_plugins -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:06.580    22:41:07 sma.sma_plugins -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:06.580     22:41:07 sma.sma_plugins -- scripts/common.sh@365 -- # decimal 1
00:14:06.580     22:41:07 sma.sma_plugins -- scripts/common.sh@353 -- # local d=1
00:14:06.580     22:41:07 sma.sma_plugins -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:06.580     22:41:07 sma.sma_plugins -- scripts/common.sh@355 -- # echo 1
00:14:06.580    22:41:07 sma.sma_plugins -- scripts/common.sh@365 -- # ver1[v]=1
00:14:06.580     22:41:07 sma.sma_plugins -- scripts/common.sh@366 -- # decimal 2
00:14:06.580     22:41:07 sma.sma_plugins -- scripts/common.sh@353 -- # local d=2
00:14:06.580     22:41:07 sma.sma_plugins -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:06.580     22:41:07 sma.sma_plugins -- scripts/common.sh@355 -- # echo 2
00:14:06.580    22:41:07 sma.sma_plugins -- scripts/common.sh@366 -- # ver2[v]=2
00:14:06.580    22:41:07 sma.sma_plugins -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:06.580    22:41:07 sma.sma_plugins -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:06.580    22:41:07 sma.sma_plugins -- scripts/common.sh@368 -- # return 0
00:14:06.580    22:41:07 sma.sma_plugins -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:06.580    22:41:07 sma.sma_plugins -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:14:06.580  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:06.580  		--rc genhtml_branch_coverage=1
00:14:06.580  		--rc genhtml_function_coverage=1
00:14:06.580  		--rc genhtml_legend=1
00:14:06.580  		--rc geninfo_all_blocks=1
00:14:06.580  		--rc geninfo_unexecuted_blocks=1
00:14:06.580  		
00:14:06.580  		'
00:14:06.580    22:41:07 sma.sma_plugins -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:14:06.580  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:06.580  		--rc genhtml_branch_coverage=1
00:14:06.580  		--rc genhtml_function_coverage=1
00:14:06.580  		--rc genhtml_legend=1
00:14:06.580  		--rc geninfo_all_blocks=1
00:14:06.580  		--rc geninfo_unexecuted_blocks=1
00:14:06.580  		
00:14:06.580  		'
00:14:06.580    22:41:07 sma.sma_plugins -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:14:06.580  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:06.580  		--rc genhtml_branch_coverage=1
00:14:06.580  		--rc genhtml_function_coverage=1
00:14:06.580  		--rc genhtml_legend=1
00:14:06.580  		--rc geninfo_all_blocks=1
00:14:06.580  		--rc geninfo_unexecuted_blocks=1
00:14:06.580  		
00:14:06.580  		'
00:14:06.580    22:41:07 sma.sma_plugins -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:14:06.580  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:06.580  		--rc genhtml_branch_coverage=1
00:14:06.580  		--rc genhtml_function_coverage=1
00:14:06.580  		--rc genhtml_legend=1
00:14:06.580  		--rc geninfo_all_blocks=1
00:14:06.580  		--rc geninfo_unexecuted_blocks=1
00:14:06.580  		
00:14:06.580  		'
00:14:06.580   22:41:07 sma.sma_plugins -- sma/plugins.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:14:06.580   22:41:07 sma.sma_plugins -- sma/plugins.sh@28 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:14:06.580   22:41:07 sma.sma_plugins -- sma/plugins.sh@31 -- # tgtpid=144959
00:14:06.580   22:41:07 sma.sma_plugins -- sma/plugins.sh@43 -- # smapid=144960
00:14:06.580   22:41:07 sma.sma_plugins -- sma/plugins.sh@30 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:14:06.580   22:41:07 sma.sma_plugins -- sma/plugins.sh@45 -- # sma_waitforlisten
00:14:06.580   22:41:07 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:14:06.580   22:41:07 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:14:06.580   22:41:07 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:14:06.580   22:41:07 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:14:06.580   22:41:07 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:06.580    22:41:07 sma.sma_plugins -- sma/plugins.sh@34 -- # cat
00:14:06.580   22:41:07 sma.sma_plugins -- sma/plugins.sh@34 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:14:06.580   22:41:07 sma.sma_plugins -- sma/plugins.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:14:06.580   22:41:07 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:14:06.840  [2024-12-10 22:41:07.394628] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:14:06.840  [2024-12-10 22:41:07.394765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144959 ]
00:14:06.840  EAL: No free 2048 kB hugepages reported on node 1
00:14:06.840  [2024-12-10 22:41:07.534764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:07.099  [2024-12-10 22:41:07.672307] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:14:07.667   22:41:08 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:14:07.667   22:41:08 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:14:07.667   22:41:08 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:07.667   22:41:08 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:14:07.926  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:07.926  I0000 00:00:1733866868.666356  144960 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:08.862   22:41:09 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:14:08.862   22:41:09 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:14:08.862   22:41:09 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:08.862   22:41:09 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:14:08.862    22:41:09 sma.sma_plugins -- sma/plugins.sh@47 -- # create_device nvme
00:14:08.862    22:41:09 sma.sma_plugins -- sma/plugins.sh@47 -- # jq -r .handle
00:14:08.862    22:41:09 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:08.862  I0000 00:00:1733866869.559726  145227 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:08.862  I0000 00:00:1733866869.561562  145227 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:08.862   22:41:09 sma.sma_plugins -- sma/plugins.sh@47 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:14:08.862    22:41:09 sma.sma_plugins -- sma/plugins.sh@48 -- # create_device nvmf_tcp
00:14:08.862    22:41:09 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:08.862    22:41:09 sma.sma_plugins -- sma/plugins.sh@48 -- # jq -r .handle
00:14:09.121  I0000 00:00:1733866869.789500  145436 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:09.121  I0000 00:00:1733866869.791205  145436 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:09.121   22:41:09 sma.sma_plugins -- sma/plugins.sh@48 -- # [[ nvmf_tcp:plugin1-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:14:09.121   22:41:09 sma.sma_plugins -- sma/plugins.sh@50 -- # killprocess 144960
00:14:09.121   22:41:09 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 144960 ']'
00:14:09.121   22:41:09 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 144960
00:14:09.121    22:41:09 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:14:09.121   22:41:09 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:09.121    22:41:09 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 144960
00:14:09.121   22:41:09 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:14:09.121   22:41:09 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:14:09.121   22:41:09 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 144960'
00:14:09.121  killing process with pid 144960
00:14:09.121   22:41:09 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 144960
00:14:09.121   22:41:09 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 144960
00:14:09.121   22:41:09 sma.sma_plugins -- sma/plugins.sh@61 -- # smapid=145463
00:14:09.121   22:41:09 sma.sma_plugins -- sma/plugins.sh@62 -- # sma_waitforlisten
00:14:09.121   22:41:09 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:14:09.121   22:41:09 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:14:09.121   22:41:09 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:14:09.121   22:41:09 sma.sma_plugins -- sma/plugins.sh@53 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:14:09.121   22:41:09 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:14:09.121   22:41:09 sma.sma_plugins -- sma/plugins.sh@53 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:14:09.121   22:41:09 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:09.121    22:41:09 sma.sma_plugins -- sma/plugins.sh@53 -- # cat
00:14:09.380   22:41:09 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:14:09.380  I0000 00:00:1733866870.125257  145463 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:10.316   22:41:10 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:14:10.316   22:41:10 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:14:10.316   22:41:10 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:10.316   22:41:10 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:14:10.316    22:41:10 sma.sma_plugins -- sma/plugins.sh@64 -- # create_device nvmf_tcp
00:14:10.316    22:41:10 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:10.316    22:41:10 sma.sma_plugins -- sma/plugins.sh@64 -- # jq -r .handle
00:14:10.576  I0000 00:00:1733866871.161008  145700 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:10.576  I0000 00:00:1733866871.162890  145700 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:10.576   22:41:11 sma.sma_plugins -- sma/plugins.sh@64 -- # [[ nvmf_tcp:plugin1-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:14:10.576   22:41:11 sma.sma_plugins -- sma/plugins.sh@65 -- # NOT create_device nvme
00:14:10.576   22:41:11 sma.sma_plugins -- common/autotest_common.sh@652 -- # local es=0
00:14:10.576   22:41:11 sma.sma_plugins -- common/autotest_common.sh@654 -- # valid_exec_arg create_device nvme
00:14:10.576   22:41:11 sma.sma_plugins -- common/autotest_common.sh@640 -- # local arg=create_device
00:14:10.576   22:41:11 sma.sma_plugins -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:10.576    22:41:11 sma.sma_plugins -- common/autotest_common.sh@644 -- # type -t create_device
00:14:10.576   22:41:11 sma.sma_plugins -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:10.576   22:41:11 sma.sma_plugins -- common/autotest_common.sh@655 -- # create_device nvme
00:14:10.576   22:41:11 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:10.835  I0000 00:00:1733866871.403074  145723 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:10.835  I0000 00:00:1733866871.404837  145723 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:10.835  Traceback (most recent call last):
00:14:10.835    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:14:10.835      main(sys.argv[1:])
00:14:10.835    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:14:10.835      result = client.call(request['method'], request.get('params', {}))
00:14:10.835               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:14:10.835    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:14:10.835      response = func(request=json_format.ParseDict(params, input()))
00:14:10.835                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:14:10.835    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:14:10.835      return _end_unary_response_blocking(state, call, False, None)
00:14:10.835             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:14:10.835    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:14:10.835      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:14:10.835      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:14:10.835  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:14:10.835  	status = StatusCode.INVALID_ARGUMENT
00:14:10.835  	details = "Unsupported device type"
00:14:10.835  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {grpc_message:"Unsupported device type", grpc_status:3, created_time:"2024-12-10T22:41:11.406689713+01:00"}"
00:14:10.835  >
00:14:10.835   22:41:11 sma.sma_plugins -- common/autotest_common.sh@655 -- # es=1
00:14:10.835   22:41:11 sma.sma_plugins -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:10.835   22:41:11 sma.sma_plugins -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:10.835   22:41:11 sma.sma_plugins -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:10.835   22:41:11 sma.sma_plugins -- sma/plugins.sh@67 -- # killprocess 145463
00:14:10.835   22:41:11 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 145463 ']'
00:14:10.835   22:41:11 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 145463
00:14:10.835    22:41:11 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:14:10.835   22:41:11 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:10.835    22:41:11 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 145463
00:14:10.835   22:41:11 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:14:10.835   22:41:11 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:14:10.835   22:41:11 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 145463'
00:14:10.835  killing process with pid 145463
00:14:10.835   22:41:11 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 145463
00:14:10.835   22:41:11 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 145463
00:14:10.835   22:41:11 sma.sma_plugins -- sma/plugins.sh@80 -- # smapid=145751
00:14:10.835   22:41:11 sma.sma_plugins -- sma/plugins.sh@81 -- # sma_waitforlisten
00:14:10.835   22:41:11 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:14:10.835   22:41:11 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:14:10.835   22:41:11 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:14:10.835   22:41:11 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:14:10.835   22:41:11 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:10.835   22:41:11 sma.sma_plugins -- sma/plugins.sh@70 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:14:10.835   22:41:11 sma.sma_plugins -- sma/plugins.sh@70 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:14:10.835    22:41:11 sma.sma_plugins -- sma/plugins.sh@70 -- # cat
00:14:10.835   22:41:11 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:14:11.094  I0000 00:00:1733866871.811535  145751 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:12.045   22:41:12 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:14:12.045   22:41:12 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:14:12.045   22:41:12 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:12.045   22:41:12 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:14:12.045    22:41:12 sma.sma_plugins -- sma/plugins.sh@83 -- # create_device nvme
00:14:12.045    22:41:12 sma.sma_plugins -- sma/plugins.sh@83 -- # jq -r .handle
00:14:12.045    22:41:12 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:12.045  I0000 00:00:1733866872.761387  145996 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:12.045  I0000 00:00:1733866872.763410  145996 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:12.045   22:41:12 sma.sma_plugins -- sma/plugins.sh@83 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:14:12.045    22:41:12 sma.sma_plugins -- sma/plugins.sh@84 -- # create_device nvmf_tcp
00:14:12.045    22:41:12 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:12.045    22:41:12 sma.sma_plugins -- sma/plugins.sh@84 -- # jq -r .handle
00:14:12.323  I0000 00:00:1733866872.991027  146023 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:12.323  I0000 00:00:1733866872.992887  146023 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:12.323   22:41:13 sma.sma_plugins -- sma/plugins.sh@84 -- # [[ nvmf_tcp:plugin1-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:14:12.323   22:41:13 sma.sma_plugins -- sma/plugins.sh@86 -- # killprocess 145751
00:14:12.323   22:41:13 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 145751 ']'
00:14:12.323   22:41:13 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 145751
00:14:12.323    22:41:13 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:14:12.323   22:41:13 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:12.323    22:41:13 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 145751
00:14:12.323   22:41:13 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:14:12.323   22:41:13 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:14:12.323   22:41:13 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 145751'
00:14:12.323  killing process with pid 145751
00:14:12.323   22:41:13 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 145751
00:14:12.323   22:41:13 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 145751
00:14:12.323   22:41:13 sma.sma_plugins -- sma/plugins.sh@99 -- # smapid=146188
00:14:12.323   22:41:13 sma.sma_plugins -- sma/plugins.sh@100 -- # sma_waitforlisten
00:14:12.323   22:41:13 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:14:12.323   22:41:13 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:14:12.323   22:41:13 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:14:12.323   22:41:13 sma.sma_plugins -- sma/plugins.sh@89 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:14:12.323   22:41:13 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:14:12.323   22:41:13 sma.sma_plugins -- sma/plugins.sh@89 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:14:12.323    22:41:13 sma.sma_plugins -- sma/plugins.sh@89 -- # cat
00:14:12.323   22:41:13 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:12.596   22:41:13 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:14:12.596  I0000 00:00:1733866873.310366  146188 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:13.621   22:41:14 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:14:13.621   22:41:14 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:14:13.621   22:41:14 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:13.621   22:41:14 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:14:13.621    22:41:14 sma.sma_plugins -- sma/plugins.sh@102 -- # create_device nvme
00:14:13.621    22:41:14 sma.sma_plugins -- sma/plugins.sh@102 -- # jq -r .handle
00:14:13.621    22:41:14 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:13.621  I0000 00:00:1733866874.366978  146298 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:13.621  I0000 00:00:1733866874.368624  146298 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:13.621   22:41:14 sma.sma_plugins -- sma/plugins.sh@102 -- # [[ nvme:plugin2-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:14:13.621    22:41:14 sma.sma_plugins -- sma/plugins.sh@103 -- # create_device nvmf_tcp
00:14:13.621    22:41:14 sma.sma_plugins -- sma/plugins.sh@103 -- # jq -r .handle
00:14:13.621    22:41:14 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:13.906  I0000 00:00:1733866874.590263  146515 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:13.906  I0000 00:00:1733866874.591556  146515 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:13.906   22:41:14 sma.sma_plugins -- sma/plugins.sh@103 -- # [[ nvmf_tcp:plugin2-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:14:13.906   22:41:14 sma.sma_plugins -- sma/plugins.sh@105 -- # killprocess 146188
00:14:13.907   22:41:14 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 146188 ']'
00:14:13.907   22:41:14 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 146188
00:14:13.907    22:41:14 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:14:13.907   22:41:14 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:13.907    22:41:14 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 146188
00:14:13.907   22:41:14 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:14:13.907   22:41:14 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:14:13.907   22:41:14 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 146188'
00:14:13.907  killing process with pid 146188
00:14:13.907   22:41:14 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 146188
00:14:13.907   22:41:14 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 146188
00:14:14.198   22:41:14 sma.sma_plugins -- sma/plugins.sh@118 -- # smapid=146551
00:14:14.198   22:41:14 sma.sma_plugins -- sma/plugins.sh@108 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:14:14.198    22:41:14 sma.sma_plugins -- sma/plugins.sh@108 -- # cat
00:14:14.198   22:41:14 sma.sma_plugins -- sma/plugins.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:14:14.198   22:41:14 sma.sma_plugins -- sma/plugins.sh@119 -- # sma_waitforlisten
00:14:14.198   22:41:14 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:14:14.198   22:41:14 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:14:14.198   22:41:14 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:14:14.198   22:41:14 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:14:14.198   22:41:14 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:14.199   22:41:14 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:14:14.199  I0000 00:00:1733866874.897222  146551 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:15.265   22:41:15 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:14:15.265   22:41:15 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:14:15.265   22:41:15 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:15.265   22:41:15 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:14:15.265    22:41:15 sma.sma_plugins -- sma/plugins.sh@121 -- # create_device nvme
00:14:15.265    22:41:15 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:15.265    22:41:15 sma.sma_plugins -- sma/plugins.sh@121 -- # jq -r .handle
00:14:15.265  I0000 00:00:1733866875.929542  146798 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:15.265  I0000 00:00:1733866875.931223  146798 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:15.265   22:41:15 sma.sma_plugins -- sma/plugins.sh@121 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:14:15.265    22:41:15 sma.sma_plugins -- sma/plugins.sh@122 -- # create_device nvmf_tcp
00:14:15.265    22:41:15 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:15.265    22:41:15 sma.sma_plugins -- sma/plugins.sh@122 -- # jq -r .handle
00:14:15.584  I0000 00:00:1733866876.176867  146826 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:15.584  I0000 00:00:1733866876.178613  146826 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:15.584   22:41:16 sma.sma_plugins -- sma/plugins.sh@122 -- # [[ nvmf_tcp:plugin2-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:14:15.584   22:41:16 sma.sma_plugins -- sma/plugins.sh@124 -- # killprocess 146551
00:14:15.584   22:41:16 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 146551 ']'
00:14:15.584   22:41:16 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 146551
00:14:15.584    22:41:16 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:14:15.584   22:41:16 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:15.584    22:41:16 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 146551
00:14:15.584   22:41:16 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:14:15.584   22:41:16 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:14:15.584   22:41:16 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 146551'
00:14:15.584  killing process with pid 146551
00:14:15.584   22:41:16 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 146551
00:14:15.584   22:41:16 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 146551
00:14:15.584   22:41:16 sma.sma_plugins -- sma/plugins.sh@134 -- # smapid=146856
00:14:15.584   22:41:16 sma.sma_plugins -- sma/plugins.sh@127 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:14:15.584   22:41:16 sma.sma_plugins -- sma/plugins.sh@127 -- # SMA_PLUGINS=plugin1:plugin2
00:14:15.584   22:41:16 sma.sma_plugins -- sma/plugins.sh@135 -- # sma_waitforlisten
00:14:15.584    22:41:16 sma.sma_plugins -- sma/plugins.sh@127 -- # cat
00:14:15.584   22:41:16 sma.sma_plugins -- sma/plugins.sh@127 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:14:15.584   22:41:16 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:14:15.584   22:41:16 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:14:15.584   22:41:16 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:14:15.584   22:41:16 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:14:15.584   22:41:16 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:15.584   22:41:16 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:14:15.902  I0000 00:00:1733866876.481721  146856 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:16.568   22:41:17 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:14:16.568   22:41:17 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:14:16.568   22:41:17 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:16.568   22:41:17 sma.sma_plugins -- sma/common.sh@12 -- # return 0
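The `sma_waitforlisten` trace above (sma/common.sh lines 7-14) is a bounded retry loop: probe 127.0.0.1:8080 up to five times, sleeping 1s between attempts, and return 0 as soon as the port accepts connections. A minimal sketch of that pattern follows; the `probe` parameter is a hypothetical stand-in for `nc -z $sma_addr $sma_port` so the loop can be exercised without a live server.

```shell
# Retry loop as traced in sma/common.sh: up to 5 probes, 1s apart.
# "$probe" names any command that succeeds once the listener is up.
sma_waitforlisten() {
    local probe=$1 i
    for ((i = 0; i < 5; i++)); do
        if "$probe"; then
            return 0    # listener is up
        fi
        sleep 1s
    done
    return 1            # gave up after 5 attempts
}
```

In the real script the probe is `nc -z` against the SMA address and port; parameterizing it is purely for illustration.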
00:14:16.568    22:41:17 sma.sma_plugins -- sma/plugins.sh@137 -- # create_device nvme
00:14:16.568    22:41:17 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:16.568    22:41:17 sma.sma_plugins -- sma/plugins.sh@137 -- # jq -r .handle
00:14:16.856  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:16.856  I0000 00:00:1733866877.515106  147101 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:16.856  I0000 00:00:1733866877.517020  147101 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:16.856   22:41:17 sma.sma_plugins -- sma/plugins.sh@137 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:14:16.856    22:41:17 sma.sma_plugins -- sma/plugins.sh@138 -- # create_device nvmf_tcp
00:14:16.856    22:41:17 sma.sma_plugins -- sma/plugins.sh@138 -- # jq -r .handle
00:14:16.856    22:41:17 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:17.134  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:17.134  I0000 00:00:1733866877.735686  147130 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:17.134  I0000 00:00:1733866877.737215  147130 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:17.134   22:41:17 sma.sma_plugins -- sma/plugins.sh@138 -- # [[ nvmf_tcp:plugin2-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:14:17.134   22:41:17 sma.sma_plugins -- sma/plugins.sh@140 -- # killprocess 146856
00:14:17.134   22:41:17 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 146856 ']'
00:14:17.134   22:41:17 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 146856
00:14:17.134    22:41:17 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:14:17.134   22:41:17 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:17.134    22:41:17 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 146856
00:14:17.134   22:41:17 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:14:17.134   22:41:17 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:14:17.134   22:41:17 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 146856'
00:14:17.134  killing process with pid 146856
00:14:17.134   22:41:17 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 146856
00:14:17.134   22:41:17 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 146856
00:14:17.134   22:41:17 sma.sma_plugins -- sma/plugins.sh@152 -- # smapid=147292
00:14:17.134   22:41:17 sma.sma_plugins -- sma/plugins.sh@153 -- # sma_waitforlisten
00:14:17.134   22:41:17 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:14:17.134   22:41:17 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:14:17.134   22:41:17 sma.sma_plugins -- sma/plugins.sh@143 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:14:17.134    22:41:17 sma.sma_plugins -- sma/plugins.sh@143 -- # cat
00:14:17.134   22:41:17 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:14:17.134   22:41:17 sma.sma_plugins -- sma/plugins.sh@143 -- # SMA_PLUGINS=plugin1
00:14:17.134   22:41:17 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:14:17.134   22:41:17 sma.sma_plugins -- sma/plugins.sh@143 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:14:17.134   22:41:17 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:17.134   22:41:17 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:14:17.419  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:17.419  I0000 00:00:1733866878.058188  147292 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:18.099   22:41:18 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:14:18.099   22:41:18 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:14:18.099   22:41:18 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:18.439   22:41:18 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:14:18.439    22:41:18 sma.sma_plugins -- sma/plugins.sh@155 -- # create_device nvme
00:14:18.439    22:41:18 sma.sma_plugins -- sma/plugins.sh@155 -- # jq -r .handle
00:14:18.439    22:41:18 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:18.439  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:18.439  I0000 00:00:1733866879.101304  147413 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:18.439  I0000 00:00:1733866879.102949  147413 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:18.439   22:41:19 sma.sma_plugins -- sma/plugins.sh@155 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:14:18.439    22:41:19 sma.sma_plugins -- sma/plugins.sh@156 -- # create_device nvmf_tcp
00:14:18.439    22:41:19 sma.sma_plugins -- sma/plugins.sh@156 -- # jq -r .handle
00:14:18.439    22:41:19 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:18.805  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:18.806  I0000 00:00:1733866879.333540  147630 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:18.806  I0000 00:00:1733866879.335221  147630 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:18.806   22:41:19 sma.sma_plugins -- sma/plugins.sh@156 -- # [[ nvmf_tcp:plugin2-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:14:18.806   22:41:19 sma.sma_plugins -- sma/plugins.sh@158 -- # killprocess 147292
00:14:18.806   22:41:19 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 147292 ']'
00:14:18.806   22:41:19 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 147292
00:14:18.806    22:41:19 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:14:18.806   22:41:19 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:18.806    22:41:19 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 147292
00:14:18.806   22:41:19 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:14:18.806   22:41:19 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:14:18.806   22:41:19 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 147292'
00:14:18.806  killing process with pid 147292
00:14:18.806   22:41:19 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 147292
00:14:18.806   22:41:19 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 147292
00:14:18.806   22:41:19 sma.sma_plugins -- sma/plugins.sh@161 -- # crypto_engines=(crypto-plugin1 crypto-plugin2)
00:14:18.806   22:41:19 sma.sma_plugins -- sma/plugins.sh@162 -- # for crypto in "${crypto_engines[@]}"
00:14:18.806   22:41:19 sma.sma_plugins -- sma/plugins.sh@175 -- # smapid=147664
00:14:18.806   22:41:19 sma.sma_plugins -- sma/plugins.sh@176 -- # sma_waitforlisten
00:14:18.806   22:41:19 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:14:18.806   22:41:19 sma.sma_plugins -- sma/plugins.sh@163 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:14:18.806   22:41:19 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:14:18.806   22:41:19 sma.sma_plugins -- sma/plugins.sh@163 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:14:18.806   22:41:19 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:14:18.806    22:41:19 sma.sma_plugins -- sma/plugins.sh@163 -- # cat
00:14:18.806   22:41:19 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:14:18.806   22:41:19 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:18.806   22:41:19 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:14:19.131  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:19.131  I0000 00:00:1733866879.676649  147664 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:19.807   22:41:20 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:14:19.807   22:41:20 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:14:19.807   22:41:20 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:19.807   22:41:20 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:14:19.807    22:41:20 sma.sma_plugins -- sma/plugins.sh@178 -- # create_device nvme
00:14:19.807    22:41:20 sma.sma_plugins -- sma/plugins.sh@178 -- # jq -r .handle
00:14:19.807    22:41:20 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:20.075  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:20.075  I0000 00:00:1733866880.711570  147909 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:20.075  I0000 00:00:1733866880.713625  147909 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:20.075   22:41:20 sma.sma_plugins -- sma/plugins.sh@178 -- # [[ nvme:plugin1-device1:crypto-plugin1 == nvme:plugin1-device1:crypto-plugin1 ]]
00:14:20.075    22:41:20 sma.sma_plugins -- sma/plugins.sh@179 -- # create_device nvmf_tcp
00:14:20.075    22:41:20 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:20.075    22:41:20 sma.sma_plugins -- sma/plugins.sh@179 -- # jq -r .handle
00:14:20.334  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:20.334  I0000 00:00:1733866880.954654  147943 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:20.334  I0000 00:00:1733866880.956244  147943 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:20.334   22:41:20 sma.sma_plugins -- sma/plugins.sh@179 -- # [[ nvmf_tcp:plugin2-device2:crypto-plugin1 == nvmf_tcp:plugin2-device2:crypto-plugin1 ]]
00:14:20.334   22:41:20 sma.sma_plugins -- sma/plugins.sh@181 -- # killprocess 147664
00:14:20.334   22:41:20 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 147664 ']'
00:14:20.334   22:41:20 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 147664
00:14:20.334    22:41:20 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:14:20.334   22:41:20 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:20.334    22:41:20 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 147664
00:14:20.334   22:41:21 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:14:20.334   22:41:21 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:14:20.334   22:41:21 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 147664'
00:14:20.334  killing process with pid 147664
00:14:20.334   22:41:21 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 147664
00:14:20.334   22:41:21 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 147664
00:14:20.334   22:41:21 sma.sma_plugins -- sma/plugins.sh@162 -- # for crypto in "${crypto_engines[@]}"
00:14:20.334   22:41:21 sma.sma_plugins -- sma/plugins.sh@175 -- # smapid=147972
00:14:20.334   22:41:21 sma.sma_plugins -- sma/plugins.sh@163 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:14:20.334   22:41:21 sma.sma_plugins -- sma/plugins.sh@163 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:14:20.334   22:41:21 sma.sma_plugins -- sma/plugins.sh@176 -- # sma_waitforlisten
00:14:20.334    22:41:21 sma.sma_plugins -- sma/plugins.sh@163 -- # cat
00:14:20.334   22:41:21 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:14:20.334   22:41:21 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:14:20.334   22:41:21 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:14:20.334   22:41:21 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:14:20.334   22:41:21 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:20.334   22:41:21 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:14:20.593  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:20.593  I0000 00:00:1733866881.265070  147972 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:21.529   22:41:22 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:14:21.529   22:41:22 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:14:21.529   22:41:22 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:21.529   22:41:22 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:14:21.529    22:41:22 sma.sma_plugins -- sma/plugins.sh@178 -- # create_device nvme
00:14:21.529    22:41:22 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:21.529    22:41:22 sma.sma_plugins -- sma/plugins.sh@178 -- # jq -r .handle
00:14:21.529  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:21.529  I0000 00:00:1733866882.300886  148208 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:21.529  I0000 00:00:1733866882.302791  148208 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:21.788   22:41:22 sma.sma_plugins -- sma/plugins.sh@178 -- # [[ nvme:plugin1-device1:crypto-plugin2 == nvme:plugin1-device1:crypto-plugin2 ]]
00:14:21.788    22:41:22 sma.sma_plugins -- sma/plugins.sh@179 -- # create_device nvmf_tcp
00:14:21.788    22:41:22 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:21.788    22:41:22 sma.sma_plugins -- sma/plugins.sh@179 -- # jq -r .handle
00:14:21.788  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:21.788  I0000 00:00:1733866882.527109  148233 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:21.788  I0000 00:00:1733866882.528763  148233 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:21.788   22:41:22 sma.sma_plugins -- sma/plugins.sh@179 -- # [[ nvmf_tcp:plugin2-device2:crypto-plugin2 == nvmf_tcp:plugin2-device2:crypto-plugin2 ]]
00:14:21.788   22:41:22 sma.sma_plugins -- sma/plugins.sh@181 -- # killprocess 147972
00:14:21.788   22:41:22 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 147972 ']'
00:14:21.788   22:41:22 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 147972
00:14:21.788    22:41:22 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:14:21.788   22:41:22 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:21.788    22:41:22 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 147972
00:14:22.047   22:41:22 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:14:22.047   22:41:22 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:14:22.047   22:41:22 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 147972'
00:14:22.047  killing process with pid 147972
00:14:22.047   22:41:22 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 147972
00:14:22.047   22:41:22 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 147972
00:14:22.047   22:41:22 sma.sma_plugins -- sma/plugins.sh@184 -- # cleanup
00:14:22.047   22:41:22 sma.sma_plugins -- sma/plugins.sh@13 -- # killprocess 144959
00:14:22.047   22:41:22 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 144959 ']'
00:14:22.047   22:41:22 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 144959
00:14:22.047    22:41:22 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:14:22.047   22:41:22 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:22.047    22:41:22 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 144959
00:14:22.047   22:41:22 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:22.047   22:41:22 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:22.047   22:41:22 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 144959'
00:14:22.047  killing process with pid 144959
00:14:22.047   22:41:22 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 144959
00:14:22.047   22:41:22 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 144959
00:14:24.579   22:41:25 sma.sma_plugins -- sma/plugins.sh@14 -- # killprocess 147972
00:14:24.579   22:41:25 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 147972 ']'
00:14:24.579   22:41:25 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 147972
00:14:24.579  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (147972) - No such process
00:14:24.579   22:41:25 sma.sma_plugins -- common/autotest_common.sh@981 -- # echo 'Process with pid 147972 is not found'
00:14:24.579  Process with pid 147972 is not found
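The repeated `killprocess` traces above (autotest_common.sh lines 954-981) follow one shape: refuse an empty pid, report "is not found" when `kill -0` fails (as it just did for 147972), otherwise log, kill, and reap the target. A condensed sketch, with the sudo child-pid resolution from the original omitted:

```shell
# Condensed killprocess as traced from autotest_common.sh.
killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1
    if ! kill -0 "$pid" 2> /dev/null; then
        # pid already gone (the 147972 case above)
        echo "Process with pid $pid is not found"
        return 0
    fi
    # the original inspects the command name to special-case sudo
    process_name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid"
    kill "$pid"
    # wait only reaps children of this shell; ignore failures otherwise
    wait "$pid" 2> /dev/null || true
}
```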
00:14:24.579   22:41:25 sma.sma_plugins -- sma/plugins.sh@185 -- # trap - SIGINT SIGTERM EXIT
00:14:24.579  
00:14:24.579  real	0m18.164s
00:14:24.579  user	0m24.212s
00:14:24.579  sys	0m1.945s
00:14:24.579   22:41:25 sma.sma_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:24.579   22:41:25 sma.sma_plugins -- common/autotest_common.sh@10 -- # set +x
00:14:24.579  ************************************
00:14:24.579  END TEST sma_plugins
00:14:24.579  ************************************
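The starred START TEST / END TEST banners and the `real`/`user`/`sys` timing above come from the `run_test` wrapper in autotest_common.sh. A rough sketch of that wrapper, with the xtrace enable/disable plumbing and argument validation omitted:

```shell
# Banner-and-timing wrapper, sketched from the run_test traces above.
run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                 # emits the real/user/sys lines on stderr
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}
```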
00:14:24.579   22:41:25 sma -- sma/sma.sh@14 -- # run_test sma_discovery /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/discovery.sh
00:14:24.579   22:41:25 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:24.579   22:41:25 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:24.579   22:41:25 sma -- common/autotest_common.sh@10 -- # set +x
00:14:24.579  ************************************
00:14:24.579  START TEST sma_discovery
00:14:24.579  ************************************
00:14:24.579   22:41:25 sma.sma_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/discovery.sh
00:14:24.838  * Looking for test storage...
00:14:24.838  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:14:24.838    22:41:25 sma.sma_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:24.838     22:41:25 sma.sma_discovery -- common/autotest_common.sh@1711 -- # lcov --version
00:14:24.838     22:41:25 sma.sma_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:14:24.838    22:41:25 sma.sma_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:14:24.838    22:41:25 sma.sma_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:24.838    22:41:25 sma.sma_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:24.838    22:41:25 sma.sma_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:24.838    22:41:25 sma.sma_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:14:24.838    22:41:25 sma.sma_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:14:24.838    22:41:25 sma.sma_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:14:24.838    22:41:25 sma.sma_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:14:24.838    22:41:25 sma.sma_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:14:24.838    22:41:25 sma.sma_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:14:24.838    22:41:25 sma.sma_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:14:24.838    22:41:25 sma.sma_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:24.838    22:41:25 sma.sma_discovery -- scripts/common.sh@344 -- # case "$op" in
00:14:24.838    22:41:25 sma.sma_discovery -- scripts/common.sh@345 -- # : 1
00:14:24.838    22:41:25 sma.sma_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:24.838    22:41:25 sma.sma_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:24.838     22:41:25 sma.sma_discovery -- scripts/common.sh@365 -- # decimal 1
00:14:24.838     22:41:25 sma.sma_discovery -- scripts/common.sh@353 -- # local d=1
00:14:24.838     22:41:25 sma.sma_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:24.838     22:41:25 sma.sma_discovery -- scripts/common.sh@355 -- # echo 1
00:14:24.838    22:41:25 sma.sma_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:14:24.838     22:41:25 sma.sma_discovery -- scripts/common.sh@366 -- # decimal 2
00:14:24.838     22:41:25 sma.sma_discovery -- scripts/common.sh@353 -- # local d=2
00:14:24.838     22:41:25 sma.sma_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:24.838     22:41:25 sma.sma_discovery -- scripts/common.sh@355 -- # echo 2
00:14:24.838    22:41:25 sma.sma_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:14:24.838    22:41:25 sma.sma_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:24.838    22:41:25 sma.sma_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:24.838    22:41:25 sma.sma_discovery -- scripts/common.sh@368 -- # return 0
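The `lt 1.15 2` trace above (scripts/common.sh lines 333-368) compares version strings by splitting each on `.`, `-`, or `:` and walking the components numerically, treating missing components as 0. A compact sketch of that logic; the `decimal` validation from the original is folded into the `:-0` defaults here:

```shell
# Component-wise version compare, as traced from scripts/common.sh.
# Returns 0 if $1 is strictly less than $2.
version_lt() {
    local -a ver1 ver2
    local v len
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1    # equal versions are not strictly less-than
}
```

This is why `lt 1.15 2` returns 0 above: the first components already decide it (1 < 2), so 1.15 sorts below 2.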
00:14:24.838    22:41:25 sma.sma_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:24.838    22:41:25 sma.sma_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:14:24.838  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:24.838  		--rc genhtml_branch_coverage=1
00:14:24.838  		--rc genhtml_function_coverage=1
00:14:24.838  		--rc genhtml_legend=1
00:14:24.838  		--rc geninfo_all_blocks=1
00:14:24.838  		--rc geninfo_unexecuted_blocks=1
00:14:24.838  		
00:14:24.838  		'
00:14:24.838    22:41:25 sma.sma_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:14:24.838  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:24.838  		--rc genhtml_branch_coverage=1
00:14:24.838  		--rc genhtml_function_coverage=1
00:14:24.838  		--rc genhtml_legend=1
00:14:24.838  		--rc geninfo_all_blocks=1
00:14:24.838  		--rc geninfo_unexecuted_blocks=1
00:14:24.838  		
00:14:24.838  		'
00:14:24.838    22:41:25 sma.sma_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:14:24.838  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:24.838  		--rc genhtml_branch_coverage=1
00:14:24.838  		--rc genhtml_function_coverage=1
00:14:24.838  		--rc genhtml_legend=1
00:14:24.838  		--rc geninfo_all_blocks=1
00:14:24.838  		--rc geninfo_unexecuted_blocks=1
00:14:24.838  		
00:14:24.838  		'
00:14:24.838    22:41:25 sma.sma_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:14:24.838  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:24.838  		--rc genhtml_branch_coverage=1
00:14:24.838  		--rc genhtml_function_coverage=1
00:14:24.838  		--rc genhtml_legend=1
00:14:24.838  		--rc geninfo_all_blocks=1
00:14:24.838  		--rc geninfo_unexecuted_blocks=1
00:14:24.838  		
00:14:24.838  		'
00:14:24.838   22:41:25 sma.sma_discovery -- sma/discovery.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:14:24.838   22:41:25 sma.sma_discovery -- sma/discovery.sh@12 -- # sma_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:24.838   22:41:25 sma.sma_discovery -- sma/discovery.sh@13 -- # rpc_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:14:24.838   22:41:25 sma.sma_discovery -- sma/discovery.sh@15 -- # t1sock=/var/tmp/spdk.sock1
00:14:24.838   22:41:25 sma.sma_discovery -- sma/discovery.sh@16 -- # t2sock=/var/tmp/spdk.sock2
00:14:24.838   22:41:25 sma.sma_discovery -- sma/discovery.sh@17 -- # invalid_port=8008
00:14:24.838   22:41:25 sma.sma_discovery -- sma/discovery.sh@18 -- # t1dscport=8009
00:14:24.838   22:41:25 sma.sma_discovery -- sma/discovery.sh@19 -- # t2dscport1=8010
00:14:24.838   22:41:25 sma.sma_discovery -- sma/discovery.sh@20 -- # t2dscport2=8011
00:14:24.838   22:41:25 sma.sma_discovery -- sma/discovery.sh@21 -- # t1nqn=nqn.2016-06.io.spdk:node1
00:14:24.838   22:41:25 sma.sma_discovery -- sma/discovery.sh@22 -- # t2nqn=nqn.2016-06.io.spdk:node2
00:14:24.838   22:41:25 sma.sma_discovery -- sma/discovery.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host0
00:14:24.838   22:41:25 sma.sma_discovery -- sma/discovery.sh@24 -- # cleanup_period=1
00:14:24.838   22:41:25 sma.sma_discovery -- sma/discovery.sh@132 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:14:24.838   22:41:25 sma.sma_discovery -- sma/discovery.sh@135 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/spdk.sock1 -m 0x1
00:14:24.838   22:41:25 sma.sma_discovery -- sma/discovery.sh@136 -- # t1pid=148931
00:14:24.838   22:41:25 sma.sma_discovery -- sma/discovery.sh@138 -- # t2pid=148932
00:14:24.838   22:41:25 sma.sma_discovery -- sma/discovery.sh@137 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/spdk.sock2 -m 0x2
00:14:24.838   22:41:25 sma.sma_discovery -- sma/discovery.sh@142 -- # tgtpid=148933
00:14:24.838   22:41:25 sma.sma_discovery -- sma/discovery.sh@141 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x4
00:14:24.838   22:41:25 sma.sma_discovery -- sma/discovery.sh@153 -- # smapid=148934
00:14:24.838   22:41:25 sma.sma_discovery -- sma/discovery.sh@155 -- # waitforlisten 148933
00:14:24.838   22:41:25 sma.sma_discovery -- common/autotest_common.sh@835 -- # '[' -z 148933 ']'
00:14:24.838   22:41:25 sma.sma_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:24.838   22:41:25 sma.sma_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:24.838   22:41:25 sma.sma_discovery -- sma/discovery.sh@145 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:14:24.838   22:41:25 sma.sma_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:24.838  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:24.838    22:41:25 sma.sma_discovery -- sma/discovery.sh@145 -- # cat
00:14:24.838   22:41:25 sma.sma_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:24.838   22:41:25 sma.sma_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:24.838  [2024-12-10 22:41:25.575398] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:14:24.838  [2024-12-10 22:41:25.575401] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:14:24.838  [2024-12-10 22:41:25.575514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148931 ]
00:14:24.838  [2024-12-10 22:41:25.575514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148933 ]
00:14:24.838  [2024-12-10 22:41:25.600249] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:14:24.838  [2024-12-10 22:41:25.600391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148932 ]
00:14:25.097  EAL: No free 2048 kB hugepages reported on node 1
00:14:25.097  EAL: No free 2048 kB hugepages reported on node 1
00:14:25.097  EAL: No free 2048 kB hugepages reported on node 1
00:14:25.097  [2024-12-10 22:41:25.703808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:25.097  [2024-12-10 22:41:25.719194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:25.097  [2024-12-10 22:41:25.754538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:25.097  [2024-12-10 22:41:25.818617] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:14:25.097  [2024-12-10 22:41:25.874066] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:14:25.355  [2024-12-10 22:41:25.906022] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:14:25.922   22:41:26 sma.sma_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:25.922   22:41:26 sma.sma_discovery -- common/autotest_common.sh@868 -- # return 0
00:14:25.922   22:41:26 sma.sma_discovery -- sma/discovery.sh@156 -- # waitforlisten 148931 /var/tmp/spdk.sock1
00:14:25.922   22:41:26 sma.sma_discovery -- common/autotest_common.sh@835 -- # '[' -z 148931 ']'
00:14:25.922   22:41:26 sma.sma_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock1
00:14:25.922   22:41:26 sma.sma_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:25.922   22:41:26 sma.sma_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock1...'
00:14:25.922  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock1...
00:14:25.922   22:41:26 sma.sma_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:25.922   22:41:26 sma.sma_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:26.180  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:26.180  I0000 00:00:1733866886.727718  148934 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:26.180  [2024-12-10 22:41:26.739144] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:26.180   22:41:26 sma.sma_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:26.180   22:41:26 sma.sma_discovery -- common/autotest_common.sh@868 -- # return 0
00:14:26.180   22:41:26 sma.sma_discovery -- sma/discovery.sh@157 -- # waitforlisten 148932 /var/tmp/spdk.sock2
00:14:26.180   22:41:26 sma.sma_discovery -- common/autotest_common.sh@835 -- # '[' -z 148932 ']'
00:14:26.180   22:41:26 sma.sma_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock2
00:14:26.180   22:41:26 sma.sma_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:26.180   22:41:26 sma.sma_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock2...'
00:14:26.180  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock2...
00:14:26.180   22:41:26 sma.sma_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:26.180   22:41:26 sma.sma_discovery -- common/autotest_common.sh@10 -- # set +x
00:14:26.438   22:41:27 sma.sma_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:26.438   22:41:27 sma.sma_discovery -- common/autotest_common.sh@868 -- # return 0
00:14:26.438    22:41:27 sma.sma_discovery -- sma/discovery.sh@162 -- # uuidgen
00:14:26.438   22:41:27 sma.sma_discovery -- sma/discovery.sh@162 -- # t1uuid=625d712a-d587-4298-969c-4d5e0e737dbd
00:14:26.438    22:41:27 sma.sma_discovery -- sma/discovery.sh@163 -- # uuidgen
00:14:26.438   22:41:27 sma.sma_discovery -- sma/discovery.sh@163 -- # t2uuid=4b56f4c2-4d3b-460d-b5cc-43381ec17344
00:14:26.438    22:41:27 sma.sma_discovery -- sma/discovery.sh@164 -- # uuidgen
00:14:26.438   22:41:27 sma.sma_discovery -- sma/discovery.sh@164 -- # t2uuid2=661d563c-e122-4df4-b815-d375358c3b20
00:14:26.438   22:41:27 sma.sma_discovery -- sma/discovery.sh@166 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock1
00:14:26.696  [2024-12-10 22:41:27.342275] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:26.697  [2024-12-10 22:41:27.382694] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:14:26.697  [2024-12-10 22:41:27.390590] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 8009 ***
00:14:26.697  null0
00:14:26.697   22:41:27 sma.sma_discovery -- sma/discovery.sh@176 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2
00:14:26.955  [2024-12-10 22:41:27.606029] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:26.955  [2024-12-10 22:41:27.662529] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 ***
00:14:26.955  [2024-12-10 22:41:27.670481] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 8010 ***
00:14:26.955  [2024-12-10 22:41:27.678524] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 8011 ***
00:14:26.955  null0
00:14:26.955  null1
00:14:26.955   22:41:27 sma.sma_discovery -- sma/discovery.sh@190 -- # sma_waitforlisten
00:14:26.955   22:41:27 sma.sma_discovery -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:14:26.955   22:41:27 sma.sma_discovery -- sma/common.sh@8 -- # local sma_port=8080
00:14:26.955   22:41:27 sma.sma_discovery -- sma/common.sh@10 -- # (( i = 0 ))
00:14:26.955   22:41:27 sma.sma_discovery -- sma/common.sh@10 -- # (( i < 5 ))
00:14:26.955   22:41:27 sma.sma_discovery -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:26.955   22:41:27 sma.sma_discovery -- sma/common.sh@12 -- # return 0
00:14:26.955   22:41:27 sma.sma_discovery -- sma/discovery.sh@192 -- # localnqn=nqn.2016-06.io.spdk:local0
00:14:26.955    22:41:27 sma.sma_discovery -- sma/discovery.sh@195 -- # create_device nqn.2016-06.io.spdk:local0
00:14:26.955    22:41:27 sma.sma_discovery -- sma/discovery.sh@69 -- # local nqn=nqn.2016-06.io.spdk:local0
00:14:26.955    22:41:27 sma.sma_discovery -- sma/discovery.sh@195 -- # jq -r .handle
00:14:26.955    22:41:27 sma.sma_discovery -- sma/discovery.sh@70 -- # local volume_id=
00:14:26.955    22:41:27 sma.sma_discovery -- sma/discovery.sh@71 -- # local volume=
00:14:26.955    22:41:27 sma.sma_discovery -- sma/discovery.sh@73 -- # shift
00:14:26.955    22:41:27 sma.sma_discovery -- sma/discovery.sh@74 -- # [[ -n '' ]]
00:14:26.955    22:41:27 sma.sma_discovery -- sma/discovery.sh@78 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:27.213  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:27.213  I0000 00:00:1733866887.925008  149392 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:27.213  I0000 00:00:1733866887.926738  149392 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:27.213  [2024-12-10 22:41:27.946732] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 ***
00:14:27.213   22:41:27 sma.sma_discovery -- sma/discovery.sh@195 -- # device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:14:27.213   22:41:27 sma.sma_discovery -- sma/discovery.sh@198 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:27.472  [
00:14:27.472    {
00:14:27.472      "nqn": "nqn.2016-06.io.spdk:local0",
00:14:27.472      "subtype": "NVMe",
00:14:27.472      "listen_addresses": [
00:14:27.472        {
00:14:27.472          "trtype": "TCP",
00:14:27.472          "adrfam": "IPv4",
00:14:27.472          "traddr": "127.0.0.1",
00:14:27.472          "trsvcid": "4419"
00:14:27.472        }
00:14:27.472      ],
00:14:27.472      "allow_any_host": false,
00:14:27.472      "hosts": [],
00:14:27.472      "serial_number": "00000000000000000000",
00:14:27.472      "model_number": "SPDK bdev Controller",
00:14:27.472      "max_namespaces": 32,
00:14:27.472      "min_cntlid": 1,
00:14:27.472      "max_cntlid": 65519,
00:14:27.472      "namespaces": []
00:14:27.472    }
00:14:27.472  ]
00:14:27.472   22:41:28 sma.sma_discovery -- sma/discovery.sh@201 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 625d712a-d587-4298-969c-4d5e0e737dbd 8009 8010
00:14:27.472   22:41:28 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:14:27.472   22:41:28 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:14:27.472   22:41:28 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:27.472    22:41:28 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 625d712a-d587-4298-969c-4d5e0e737dbd 8009 8010
00:14:27.472    22:41:28 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=625d712a-d587-4298-969c-4d5e0e737dbd
00:14:27.472    22:41:28 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:14:27.472    22:41:28 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:14:27.472     22:41:28 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 625d712a-d587-4298-969c-4d5e0e737dbd
00:14:27.472     22:41:28 sma.sma_discovery -- sma/common.sh@20 -- # python
00:14:27.472     22:41:28 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009 8010
00:14:27.472     22:41:28 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009' '8010')
00:14:27.472     22:41:28 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:14:27.472     22:41:28 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:14:27.472     22:41:28 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:14:27.472     22:41:28 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:14:27.472     22:41:28 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:14:27.472     22:41:28 sma.sma_discovery -- sma/discovery.sh@44 -- # echo ,
00:14:27.472     22:41:28 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:14:27.472     22:41:28 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:14:27.472     22:41:28 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:14:27.472     22:41:28 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:14:27.472     22:41:28 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:14:27.472     22:41:28 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:14:27.731  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:27.731  I0000 00:00:1733866888.412962  149422 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:27.731  I0000 00:00:1733866888.414920  149422 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:30.264  {}
00:14:30.264    22:41:30 sma.sma_discovery -- sma/discovery.sh@204 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:30.264    22:41:30 sma.sma_discovery -- sma/discovery.sh@204 -- # jq -r '. | length'
00:14:30.264   22:41:30 sma.sma_discovery -- sma/discovery.sh@204 -- # [[ 2 -eq 2 ]]
00:14:30.264   22:41:30 sma.sma_discovery -- sma/discovery.sh@206 -- # jq -r '.[].trid.trsvcid'
00:14:30.264   22:41:30 sma.sma_discovery -- sma/discovery.sh@206 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:30.264   22:41:30 sma.sma_discovery -- sma/discovery.sh@206 -- # grep 8009
00:14:30.522  8009
00:14:30.522   22:41:31 sma.sma_discovery -- sma/discovery.sh@207 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:30.522   22:41:31 sma.sma_discovery -- sma/discovery.sh@207 -- # jq -r '.[].trid.trsvcid'
00:14:30.522   22:41:31 sma.sma_discovery -- sma/discovery.sh@207 -- # grep 8010
00:14:30.780  8010
00:14:30.780    22:41:31 sma.sma_discovery -- sma/discovery.sh@210 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:30.780    22:41:31 sma.sma_discovery -- sma/discovery.sh@210 -- # jq -r '.[].namespaces | length'
00:14:31.039   22:41:31 sma.sma_discovery -- sma/discovery.sh@210 -- # [[ 1 -eq 1 ]]
00:14:31.039    22:41:31 sma.sma_discovery -- sma/discovery.sh@211 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:31.039    22:41:31 sma.sma_discovery -- sma/discovery.sh@211 -- # jq -r '.[].namespaces[0].uuid'
00:14:31.298   22:41:31 sma.sma_discovery -- sma/discovery.sh@211 -- # [[ 625d712a-d587-4298-969c-4d5e0e737dbd == \6\2\5\d\7\1\2\a\-\d\5\8\7\-\4\2\9\8\-\9\6\9\c\-\4\d\5\e\0\e\7\3\7\d\b\d ]]
00:14:31.298   22:41:31 sma.sma_discovery -- sma/discovery.sh@214 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 4b56f4c2-4d3b-460d-b5cc-43381ec17344 8010
00:14:31.298   22:41:31 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:14:31.298   22:41:31 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:14:31.298   22:41:31 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:31.298    22:41:31 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 4b56f4c2-4d3b-460d-b5cc-43381ec17344 8010
00:14:31.298    22:41:31 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=4b56f4c2-4d3b-460d-b5cc-43381ec17344
00:14:31.298    22:41:31 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:14:31.298    22:41:31 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:14:31.298     22:41:31 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 4b56f4c2-4d3b-460d-b5cc-43381ec17344
00:14:31.298     22:41:31 sma.sma_discovery -- sma/common.sh@20 -- # python
00:14:31.298     22:41:31 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8010
00:14:31.298     22:41:31 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8010')
00:14:31.298     22:41:31 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:14:31.298     22:41:31 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:14:31.298     22:41:31 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:14:31.298     22:41:31 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:14:31.298     22:41:31 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:14:31.298     22:41:31 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:14:31.298     22:41:31 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:14:31.556  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:31.556  I0000 00:00:1733866892.102052  150088 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:31.556  I0000 00:00:1733866892.103675  150088 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:31.556  {}
00:14:31.556    22:41:32 sma.sma_discovery -- sma/discovery.sh@217 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:31.556    22:41:32 sma.sma_discovery -- sma/discovery.sh@217 -- # jq -r '. | length'
00:14:31.815   22:41:32 sma.sma_discovery -- sma/discovery.sh@217 -- # [[ 2 -eq 2 ]]
00:14:31.815    22:41:32 sma.sma_discovery -- sma/discovery.sh@218 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:31.815    22:41:32 sma.sma_discovery -- sma/discovery.sh@218 -- # jq -r '.[].namespaces | length'
00:14:31.815   22:41:32 sma.sma_discovery -- sma/discovery.sh@218 -- # [[ 2 -eq 2 ]]
00:14:31.815   22:41:32 sma.sma_discovery -- sma/discovery.sh@219 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:31.815   22:41:32 sma.sma_discovery -- sma/discovery.sh@219 -- # jq -r '.[].namespaces[].uuid'
00:14:31.815   22:41:32 sma.sma_discovery -- sma/discovery.sh@219 -- # grep 625d712a-d587-4298-969c-4d5e0e737dbd
00:14:32.073  625d712a-d587-4298-969c-4d5e0e737dbd
00:14:32.073   22:41:32 sma.sma_discovery -- sma/discovery.sh@220 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:32.073   22:41:32 sma.sma_discovery -- sma/discovery.sh@220 -- # jq -r '.[].namespaces[].uuid'
00:14:32.073   22:41:32 sma.sma_discovery -- sma/discovery.sh@220 -- # grep 4b56f4c2-4d3b-460d-b5cc-43381ec17344
00:14:32.332  4b56f4c2-4d3b-460d-b5cc-43381ec17344
00:14:32.332   22:41:33 sma.sma_discovery -- sma/discovery.sh@223 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 625d712a-d587-4298-969c-4d5e0e737dbd
00:14:32.332   22:41:33 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:32.332    22:41:33 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 625d712a-d587-4298-969c-4d5e0e737dbd
00:14:32.332    22:41:33 sma.sma_discovery -- sma/common.sh@20 -- # python
00:14:32.590  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:32.590  I0000 00:00:1733866893.297499  150335 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:32.590  I0000 00:00:1733866893.299274  150335 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:32.590  {}
00:14:32.590    22:41:33 sma.sma_discovery -- sma/discovery.sh@227 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:32.590    22:41:33 sma.sma_discovery -- sma/discovery.sh@227 -- # jq -r '. | length'
00:14:32.849   22:41:33 sma.sma_discovery -- sma/discovery.sh@227 -- # [[ 1 -eq 1 ]]
00:14:32.849   22:41:33 sma.sma_discovery -- sma/discovery.sh@228 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:32.849   22:41:33 sma.sma_discovery -- sma/discovery.sh@228 -- # jq -r '.[].trid.trsvcid'
00:14:32.849   22:41:33 sma.sma_discovery -- sma/discovery.sh@228 -- # grep 8010
00:14:33.108  8010
00:14:33.108    22:41:33 sma.sma_discovery -- sma/discovery.sh@230 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:33.108    22:41:33 sma.sma_discovery -- sma/discovery.sh@230 -- # jq -r '.[].namespaces | length'
00:14:33.366   22:41:34 sma.sma_discovery -- sma/discovery.sh@230 -- # [[ 1 -eq 1 ]]
00:14:33.366    22:41:34 sma.sma_discovery -- sma/discovery.sh@231 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:33.366    22:41:34 sma.sma_discovery -- sma/discovery.sh@231 -- # jq -r '.[].namespaces[0].uuid'
00:14:33.625   22:41:34 sma.sma_discovery -- sma/discovery.sh@231 -- # [[ 4b56f4c2-4d3b-460d-b5cc-43381ec17344 == \4\b\5\6\f\4\c\2\-\4\d\3\b\-\4\6\0\d\-\b\5\c\c\-\4\3\3\8\1\e\c\1\7\3\4\4 ]]
00:14:33.625   22:41:34 sma.sma_discovery -- sma/discovery.sh@234 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 4b56f4c2-4d3b-460d-b5cc-43381ec17344
00:14:33.625   22:41:34 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:33.625    22:41:34 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 4b56f4c2-4d3b-460d-b5cc-43381ec17344
00:14:33.625    22:41:34 sma.sma_discovery -- sma/common.sh@20 -- # python
00:14:33.884  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:33.884  I0000 00:00:1733866894.490424  150574 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:33.884  I0000 00:00:1733866894.492108  150574 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:33.884  {}
00:14:33.884    22:41:34 sma.sma_discovery -- sma/discovery.sh@237 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:33.884    22:41:34 sma.sma_discovery -- sma/discovery.sh@237 -- # jq -r '. | length'
00:14:34.142   22:41:34 sma.sma_discovery -- sma/discovery.sh@237 -- # [[ 0 -eq 0 ]]
00:14:34.143    22:41:34 sma.sma_discovery -- sma/discovery.sh@238 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:34.143    22:41:34 sma.sma_discovery -- sma/discovery.sh@238 -- # jq -r '.[].namespaces | length'
00:14:34.401   22:41:34 sma.sma_discovery -- sma/discovery.sh@238 -- # [[ 0 -eq 0 ]]
00:14:34.401    22:41:35 sma.sma_discovery -- sma/discovery.sh@241 -- # uuidgen
00:14:34.401   22:41:35 sma.sma_discovery -- sma/discovery.sh@241 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 4f50dd1e-d72e-4fde-bad1-1f1359cbc5e6 8009
00:14:34.401   22:41:35 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:14:34.401   22:41:35 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 4f50dd1e-d72e-4fde-bad1-1f1359cbc5e6 8009
00:14:34.401   22:41:35 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:14:34.401   22:41:35 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:34.401    22:41:35 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t attach_volume
00:14:34.401   22:41:35 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:34.401   22:41:35 sma.sma_discovery -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 4f50dd1e-d72e-4fde-bad1-1f1359cbc5e6 8009
00:14:34.401   22:41:35 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:14:34.401   22:41:35 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:14:34.401   22:41:35 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:34.401    22:41:35 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 4f50dd1e-d72e-4fde-bad1-1f1359cbc5e6 8009
00:14:34.401    22:41:35 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=4f50dd1e-d72e-4fde-bad1-1f1359cbc5e6
00:14:34.401    22:41:35 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:14:34.401    22:41:35 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:14:34.401     22:41:35 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 4f50dd1e-d72e-4fde-bad1-1f1359cbc5e6
00:14:34.401     22:41:35 sma.sma_discovery -- sma/common.sh@20 -- # python
00:14:34.401     22:41:35 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009
00:14:34.401     22:41:35 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009')
00:14:34.401     22:41:35 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:14:34.401     22:41:35 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:14:34.401     22:41:35 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:14:34.401     22:41:35 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:14:34.402     22:41:35 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:14:34.402     22:41:35 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:14:34.402     22:41:35 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:14:34.660  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:34.660  I0000 00:00:1733866895.329516  150808 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:34.660  I0000 00:00:1733866895.331241  150808 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:36.035  [2024-12-10 22:41:36.437478] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 4f50dd1e-d72e-4fde-bad1-1f1359cbc5e6
00:14:36.035  [2024-12-10 22:41:36.537709] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 4f50dd1e-d72e-4fde-bad1-1f1359cbc5e6
00:14:36.036  [2024-12-10 22:41:36.637945] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 4f50dd1e-d72e-4fde-bad1-1f1359cbc5e6
00:14:36.036  [2024-12-10 22:41:36.738176] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 4f50dd1e-d72e-4fde-bad1-1f1359cbc5e6
00:14:36.294  [2024-12-10 22:41:36.838407] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 4f50dd1e-d72e-4fde-bad1-1f1359cbc5e6
00:14:36.294  [2024-12-10 22:41:36.938640] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 4f50dd1e-d72e-4fde-bad1-1f1359cbc5e6
00:14:36.294  [2024-12-10 22:41:37.038873] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 4f50dd1e-d72e-4fde-bad1-1f1359cbc5e6
00:14:36.552  [2024-12-10 22:41:37.139106] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 4f50dd1e-d72e-4fde-bad1-1f1359cbc5e6
00:14:36.552  [2024-12-10 22:41:37.239336] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 4f50dd1e-d72e-4fde-bad1-1f1359cbc5e6
00:14:36.811  [2024-12-10 22:41:37.339569] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 4f50dd1e-d72e-4fde-bad1-1f1359cbc5e6
00:14:36.811  [2024-12-10 22:41:37.439804] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 4f50dd1e-d72e-4fde-bad1-1f1359cbc5e6
00:14:36.811  [2024-12-10 22:41:37.439826] bdev.c:8824:_bdev_open_async: *ERROR*: Timed out while waiting for bdev '4f50dd1e-d72e-4fde-bad1-1f1359cbc5e6' to appear
00:14:36.811  Traceback (most recent call last):
00:14:36.811    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:14:36.811      main(sys.argv[1:])
00:14:36.811    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:14:36.811      result = client.call(request['method'], request.get('params', {}))
00:14:36.811               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:14:36.811    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:14:36.811      response = func(request=json_format.ParseDict(params, input()))
00:14:36.811                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:14:36.811    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:14:36.811      return _end_unary_response_blocking(state, call, False, None)
00:14:36.811             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:14:36.811    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:14:36.811      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:14:36.811      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:14:36.811  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:14:36.811  	status = StatusCode.NOT_FOUND
00:14:36.811  	details = "Volume could not be found"
00:14:36.811  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {created_time:"2024-12-10T22:41:37.456965293+01:00", grpc_status:5, grpc_message:"Volume could not be found"}"
00:14:36.811  >
00:14:36.811   22:41:37 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:14:36.811   22:41:37 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:36.811   22:41:37 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:36.811   22:41:37 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:36.811    22:41:37 sma.sma_discovery -- sma/discovery.sh@242 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:36.811    22:41:37 sma.sma_discovery -- sma/discovery.sh@242 -- # jq -r '. | length'
00:14:37.070   22:41:37 sma.sma_discovery -- sma/discovery.sh@242 -- # [[ 0 -eq 0 ]]
00:14:37.070    22:41:37 sma.sma_discovery -- sma/discovery.sh@243 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:37.070    22:41:37 sma.sma_discovery -- sma/discovery.sh@243 -- # jq -r '.[].namespaces | length'
00:14:37.328   22:41:37 sma.sma_discovery -- sma/discovery.sh@243 -- # [[ 0 -eq 0 ]]
00:14:37.328   22:41:37 sma.sma_discovery -- sma/discovery.sh@246 -- # volumes=($t1uuid $t2uuid)
00:14:37.328   22:41:37 sma.sma_discovery -- sma/discovery.sh@247 -- # for volume_id in "${volumes[@]}"
00:14:37.328   22:41:37 sma.sma_discovery -- sma/discovery.sh@248 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 625d712a-d587-4298-969c-4d5e0e737dbd 8009 8010
00:14:37.328   22:41:37 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:14:37.328   22:41:37 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:14:37.328   22:41:37 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:37.328    22:41:37 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 625d712a-d587-4298-969c-4d5e0e737dbd 8009 8010
00:14:37.328    22:41:37 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=625d712a-d587-4298-969c-4d5e0e737dbd
00:14:37.328    22:41:37 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:14:37.328    22:41:37 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:14:37.328     22:41:37 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 625d712a-d587-4298-969c-4d5e0e737dbd
00:14:37.328     22:41:37 sma.sma_discovery -- sma/common.sh@20 -- # python
00:14:37.328     22:41:37 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009 8010
00:14:37.328     22:41:37 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009' '8010')
00:14:37.328     22:41:37 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:14:37.328     22:41:37 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:14:37.328     22:41:37 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:14:37.328     22:41:37 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:14:37.328     22:41:37 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:14:37.328     22:41:37 sma.sma_discovery -- sma/discovery.sh@44 -- # echo ,
00:14:37.328     22:41:37 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:14:37.328     22:41:37 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:14:37.328     22:41:37 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:14:37.328     22:41:37 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:14:37.328     22:41:37 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:14:37.328     22:41:37 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:14:37.586  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:37.586  I0000 00:00:1733866898.214913  151265 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:37.586  I0000 00:00:1733866898.216649  151265 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:40.118  {}
00:14:40.118   22:41:40 sma.sma_discovery -- sma/discovery.sh@247 -- # for volume_id in "${volumes[@]}"
00:14:40.118   22:41:40 sma.sma_discovery -- sma/discovery.sh@248 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 4b56f4c2-4d3b-460d-b5cc-43381ec17344 8009 8010
00:14:40.118   22:41:40 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:14:40.118   22:41:40 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:14:40.118   22:41:40 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:40.118    22:41:40 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 4b56f4c2-4d3b-460d-b5cc-43381ec17344 8009 8010
00:14:40.118    22:41:40 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=4b56f4c2-4d3b-460d-b5cc-43381ec17344
00:14:40.118    22:41:40 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:14:40.118    22:41:40 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:14:40.118     22:41:40 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 4b56f4c2-4d3b-460d-b5cc-43381ec17344
00:14:40.118     22:41:40 sma.sma_discovery -- sma/common.sh@20 -- # python
00:14:40.118     22:41:40 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009 8010
00:14:40.118     22:41:40 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009' '8010')
00:14:40.118     22:41:40 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:14:40.118     22:41:40 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:14:40.118     22:41:40 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:14:40.118     22:41:40 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:14:40.118     22:41:40 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:14:40.118     22:41:40 sma.sma_discovery -- sma/discovery.sh@44 -- # echo ,
00:14:40.118     22:41:40 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:14:40.118     22:41:40 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:14:40.118     22:41:40 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:14:40.118     22:41:40 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:14:40.118     22:41:40 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:14:40.118     22:41:40 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
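The `format_endpoints` loop traced above assembles one discovery-endpoint entry per port, emitting a comma between entries. A minimal Python sketch of the same logic (the function name and `traddr` default are illustrative, not SPDK code; the entry shape follows the `discovery_endpoints` payload produced later in this log):

```python
import json

def format_endpoints(*ports, traddr="127.0.0.1"):
    """Build the discovery_endpoints list the bash loop assembles,
    one {trtype, traddr, trsvcid} entry per port."""
    return [
        {"trtype": "tcp", "traddr": traddr, "trsvcid": str(p)}
        for p in ports
    ]

print(json.dumps(format_endpoints(8009, 8010)))
```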
00:14:40.118  {}
00:14:40.118    22:41:40 sma.sma_discovery -- sma/discovery.sh@251 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:40.118    22:41:40 sma.sma_discovery -- sma/discovery.sh@251 -- # jq -r '. | length'
00:14:40.376   22:41:41 sma.sma_discovery -- sma/discovery.sh@251 -- # [[ 2 -eq 2 ]]
00:14:40.376   22:41:41 sma.sma_discovery -- sma/discovery.sh@252 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:40.376   22:41:41 sma.sma_discovery -- sma/discovery.sh@252 -- # jq -r '.[].trid.trsvcid'
00:14:40.376   22:41:41 sma.sma_discovery -- sma/discovery.sh@252 -- # grep 8009
00:14:40.634  8009
00:14:40.634   22:41:41 sma.sma_discovery -- sma/discovery.sh@253 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:40.634   22:41:41 sma.sma_discovery -- sma/discovery.sh@253 -- # jq -r '.[].trid.trsvcid'
00:14:40.634   22:41:41 sma.sma_discovery -- sma/discovery.sh@253 -- # grep 8010
00:14:40.892  8010
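The checks above pipe `bdev_nvme_get_discovery_info` through `jq -r '.[].trid.trsvcid'` and grep for each port. The same extraction can be sketched in Python against a sample payload (abbreviated to just the fields these checks use):

```python
import json

# Abbreviated sample of bdev_nvme_get_discovery_info output: a list of
# discovery contexts, each carrying a transport ID (trid) with a service port.
sample = json.loads("""
[
  {"trid": {"trtype": "TCP", "traddr": "127.0.0.1", "trsvcid": "8009"}},
  {"trid": {"trtype": "TCP", "traddr": "127.0.0.1", "trsvcid": "8010"}}
]
""")

# Equivalent of: jq -r '.[].trid.trsvcid'
trsvcids = [entry["trid"]["trsvcid"] for entry in sample]
print(trsvcids)  # ['8009', '8010']
```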
00:14:40.892   22:41:41 sma.sma_discovery -- sma/discovery.sh@254 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:40.892   22:41:41 sma.sma_discovery -- sma/discovery.sh@254 -- # jq -r '.[].namespaces[].uuid'
00:14:40.892   22:41:41 sma.sma_discovery -- sma/discovery.sh@254 -- # grep 625d712a-d587-4298-969c-4d5e0e737dbd
00:14:41.150  625d712a-d587-4298-969c-4d5e0e737dbd
00:14:41.150   22:41:41 sma.sma_discovery -- sma/discovery.sh@255 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:41.150   22:41:41 sma.sma_discovery -- sma/discovery.sh@255 -- # grep 4b56f4c2-4d3b-460d-b5cc-43381ec17344
00:14:41.150   22:41:41 sma.sma_discovery -- sma/discovery.sh@255 -- # jq -r '.[].namespaces[].uuid'
00:14:41.150  4b56f4c2-4d3b-460d-b5cc-43381ec17344
00:14:41.150   22:41:41 sma.sma_discovery -- sma/discovery.sh@258 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 625d712a-d587-4298-969c-4d5e0e737dbd
00:14:41.150   22:41:41 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:41.150    22:41:41 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 625d712a-d587-4298-969c-4d5e0e737dbd
00:14:41.150    22:41:41 sma.sma_discovery -- sma/common.sh@20 -- # python
00:14:41.717  {}
00:14:41.717    22:41:42 sma.sma_discovery -- sma/discovery.sh@260 -- # jq -r '. | length'
00:14:41.717    22:41:42 sma.sma_discovery -- sma/discovery.sh@260 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:41.717   22:41:42 sma.sma_discovery -- sma/discovery.sh@260 -- # [[ 2 -eq 2 ]]
00:14:41.717   22:41:42 sma.sma_discovery -- sma/discovery.sh@261 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:41.717   22:41:42 sma.sma_discovery -- sma/discovery.sh@261 -- # jq -r '.[].trid.trsvcid'
00:14:41.717   22:41:42 sma.sma_discovery -- sma/discovery.sh@261 -- # grep 8009
00:14:41.976  8009
00:14:41.976   22:41:42 sma.sma_discovery -- sma/discovery.sh@262 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:41.976   22:41:42 sma.sma_discovery -- sma/discovery.sh@262 -- # jq -r '.[].trid.trsvcid'
00:14:41.976   22:41:42 sma.sma_discovery -- sma/discovery.sh@262 -- # grep 8010
00:14:42.235  8010
00:14:42.235   22:41:42 sma.sma_discovery -- sma/discovery.sh@265 -- # NOT delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:14:42.235   22:41:42 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:14:42.235   22:41:42 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:14:42.235   22:41:42 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=delete_device
00:14:42.235   22:41:42 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:42.235    22:41:42 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t delete_device
00:14:42.235   22:41:42 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:42.235   22:41:42 sma.sma_discovery -- common/autotest_common.sh@655 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:14:42.235   22:41:42 sma.sma_discovery -- sma/discovery.sh@95 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:42.493  Traceback (most recent call last):
00:14:42.493    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:14:42.493      main(sys.argv[1:])
00:14:42.493    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:14:42.493      result = client.call(request['method'], request.get('params', {}))
00:14:42.493               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:14:42.493    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:14:42.493      response = func(request=json_format.ParseDict(params, input()))
00:14:42.493                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:14:42.493    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:14:42.493      return _end_unary_response_blocking(state, call, False, None)
00:14:42.493             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:14:42.493    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:14:42.493      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:14:42.493      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:14:42.493  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:14:42.493  	status = StatusCode.FAILED_PRECONDITION
00:14:42.493  	details = "Device has attached volumes"
00:14:42.493  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {created_time:"2024-12-10T22:41:43.117694376+01:00", grpc_status:9, grpc_message:"Device has attached volumes"}"
00:14:42.493  >
00:14:42.493   22:41:43 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:14:42.493   22:41:43 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:42.493   22:41:43 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:42.493   22:41:43 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:42.493    22:41:43 sma.sma_discovery -- sma/discovery.sh@267 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:42.493    22:41:43 sma.sma_discovery -- sma/discovery.sh@267 -- # jq -r '. | length'
00:14:42.752   22:41:43 sma.sma_discovery -- sma/discovery.sh@267 -- # [[ 2 -eq 2 ]]
00:14:42.752   22:41:43 sma.sma_discovery -- sma/discovery.sh@268 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:42.752   22:41:43 sma.sma_discovery -- sma/discovery.sh@268 -- # jq -r '.[].trid.trsvcid'
00:14:42.752   22:41:43 sma.sma_discovery -- sma/discovery.sh@268 -- # grep 8009
00:14:43.011  8009
00:14:43.011   22:41:43 sma.sma_discovery -- sma/discovery.sh@269 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:43.011   22:41:43 sma.sma_discovery -- sma/discovery.sh@269 -- # grep 8010
00:14:43.011   22:41:43 sma.sma_discovery -- sma/discovery.sh@269 -- # jq -r '.[].trid.trsvcid'
00:14:43.011  8010
00:14:43.011   22:41:43 sma.sma_discovery -- sma/discovery.sh@272 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 4b56f4c2-4d3b-460d-b5cc-43381ec17344
00:14:43.011   22:41:43 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:43.011    22:41:43 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 4b56f4c2-4d3b-460d-b5cc-43381ec17344
00:14:43.011    22:41:43 sma.sma_discovery -- sma/common.sh@20 -- # python
00:14:43.529  {}
00:14:43.529   22:41:44 sma.sma_discovery -- sma/discovery.sh@273 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:14:43.529   22:41:44 sma.sma_discovery -- sma/discovery.sh@95 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:43.529  {}
00:14:43.790    22:41:44 sma.sma_discovery -- sma/discovery.sh@275 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:43.790    22:41:44 sma.sma_discovery -- sma/discovery.sh@275 -- # jq -r '. | length'
00:14:43.790   22:41:44 sma.sma_discovery -- sma/discovery.sh@275 -- # [[ 0 -eq 0 ]]
00:14:43.790   22:41:44 sma.sma_discovery -- sma/discovery.sh@276 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:43.790   22:41:44 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:14:43.790   22:41:44 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:43.790   22:41:44 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:14:43.790   22:41:44 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:43.790    22:41:44 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:14:43.790   22:41:44 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:43.790    22:41:44 sma.sma_discovery -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:14:43.790   22:41:44 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:43.790   22:41:44 sma.sma_discovery -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:14:43.790   22:41:44 sma.sma_discovery -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py ]]
00:14:43.790   22:41:44 sma.sma_discovery -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:44.048  [2024-12-10 22:41:44.702490] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:local0' does not exist
00:14:44.048  request:
00:14:44.048  {
00:14:44.048    "nqn": "nqn.2016-06.io.spdk:local0",
00:14:44.048    "method": "nvmf_get_subsystems",
00:14:44.048    "req_id": 1
00:14:44.048  }
00:14:44.048  Got JSON-RPC error response
00:14:44.048  response:
00:14:44.048  {
00:14:44.048    "code": -19,
00:14:44.048    "message": "No such device"
00:14:44.048  }
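The JSON-RPC error above reports code `-19` once the subsystem has been deleted; SPDK JSON-RPC errors carry negated errno values, and 19 is `ENODEV` ("No such device"), which matches the message. A quick check:

```python
import errno
import json

# The error object returned by nvmf_get_subsystems above.
response = json.loads('{"code": -19, "message": "No such device"}')

# SPDK JSON-RPC errors carry negated errno values: -19 == -ENODEV.
assert -response["code"] == errno.ENODEV
print(errno.errorcode[-response["code"]])  # ENODEV
```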
00:14:44.048   22:41:44 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:14:44.048   22:41:44 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:44.048   22:41:44 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:44.048   22:41:44 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:44.048    22:41:44 sma.sma_discovery -- sma/discovery.sh@279 -- # create_device nqn.2016-06.io.spdk:local0 625d712a-d587-4298-969c-4d5e0e737dbd 8009
00:14:44.048    22:41:44 sma.sma_discovery -- sma/discovery.sh@279 -- # jq -r .handle
00:14:44.048    22:41:44 sma.sma_discovery -- sma/discovery.sh@69 -- # local nqn=nqn.2016-06.io.spdk:local0
00:14:44.048    22:41:44 sma.sma_discovery -- sma/discovery.sh@70 -- # local volume_id=625d712a-d587-4298-969c-4d5e0e737dbd
00:14:44.048    22:41:44 sma.sma_discovery -- sma/discovery.sh@71 -- # local volume=
00:14:44.048    22:41:44 sma.sma_discovery -- sma/discovery.sh@73 -- # shift
00:14:44.048    22:41:44 sma.sma_discovery -- sma/discovery.sh@74 -- # [[ -n 625d712a-d587-4298-969c-4d5e0e737dbd ]]
00:14:44.048     22:41:44 sma.sma_discovery -- sma/discovery.sh@75 -- # format_volume 625d712a-d587-4298-969c-4d5e0e737dbd 8009
00:14:44.048     22:41:44 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=625d712a-d587-4298-969c-4d5e0e737dbd
00:14:44.048     22:41:44 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:14:44.048     22:41:44 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:14:44.048      22:41:44 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 625d712a-d587-4298-969c-4d5e0e737dbd
00:14:44.048      22:41:44 sma.sma_discovery -- sma/common.sh@20 -- # python
00:14:44.048      22:41:44 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009
00:14:44.048      22:41:44 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009')
00:14:44.048      22:41:44 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:14:44.048      22:41:44 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:14:44.048      22:41:44 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:14:44.048      22:41:44 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:14:44.049      22:41:44 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:14:44.049      22:41:44 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:14:44.049      22:41:44 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:14:44.049    22:41:44 sma.sma_discovery -- sma/discovery.sh@75 -- # volume='"volume": {
00:14:44.049  "volume_id": "Yl1xKtWHQpiWnE1eDnN9vQ==",
00:14:44.049  "nvmf": {
00:14:44.049  "hostnqn": "nqn.2016-06.io.spdk:host0",
00:14:44.049  "discovery": {
00:14:44.049  "discovery_endpoints": [
00:14:44.049  {
00:14:44.049  "trtype": "tcp",
00:14:44.049  "traddr": "127.0.0.1",
00:14:44.049  "trsvcid": "8009"
00:14:44.049  }
00:14:44.049  ]
00:14:44.049  }
00:14:44.049  }
00:14:44.049  },'
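The `volume_id` in the payload above is produced by `uuid2base64` (sma/common.sh), which base64-encodes the 16 raw bytes of the volume UUID. A small Python reproduction of that conversion (function name mirrors the shell helper; the encoding itself is standard):

```python
import base64
import uuid

def uuid2base64(u: str) -> str:
    """Base64-encode the 16 raw bytes of a UUID string."""
    return base64.b64encode(uuid.UUID(u).bytes).decode()

print(uuid2base64("625d712a-d587-4298-969c-4d5e0e737dbd"))
# -> Yl1xKtWHQpiWnE1eDnN9vQ== (the volume_id seen in the payload above)
```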
00:14:44.049    22:41:44 sma.sma_discovery -- sma/discovery.sh@78 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:45.683  [2024-12-10 22:41:46.082544] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 ***
00:14:45.683   22:41:46 sma.sma_discovery -- sma/discovery.sh@279 -- # device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:14:45.683    22:41:46 sma.sma_discovery -- sma/discovery.sh@282 -- # jq -r '. | length'
00:14:45.683    22:41:46 sma.sma_discovery -- sma/discovery.sh@282 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:45.683   22:41:46 sma.sma_discovery -- sma/discovery.sh@282 -- # [[ 1 -eq 1 ]]
00:14:45.683   22:41:46 sma.sma_discovery -- sma/discovery.sh@283 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:45.683   22:41:46 sma.sma_discovery -- sma/discovery.sh@283 -- # grep 8009
00:14:45.683   22:41:46 sma.sma_discovery -- sma/discovery.sh@283 -- # jq -r '.[].trid.trsvcid'
00:14:45.941  8009
00:14:45.942    22:41:46 sma.sma_discovery -- sma/discovery.sh@284 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:45.942    22:41:46 sma.sma_discovery -- sma/discovery.sh@284 -- # jq -r '.[].namespaces | length'
00:14:46.200   22:41:46 sma.sma_discovery -- sma/discovery.sh@284 -- # [[ 1 -eq 1 ]]
00:14:46.200    22:41:46 sma.sma_discovery -- sma/discovery.sh@285 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:46.200    22:41:46 sma.sma_discovery -- sma/discovery.sh@285 -- # jq -r '.[].namespaces[0].uuid'
00:14:46.458   22:41:47 sma.sma_discovery -- sma/discovery.sh@285 -- # [[ 625d712a-d587-4298-969c-4d5e0e737dbd == \6\2\5\d\7\1\2\a\-\d\5\8\7\-\4\2\9\8\-\9\6\9\c\-\4\d\5\e\0\e\7\3\7\d\b\d ]]
00:14:46.458   22:41:47 sma.sma_discovery -- sma/discovery.sh@288 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 625d712a-d587-4298-969c-4d5e0e737dbd
00:14:46.458   22:41:47 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:46.458    22:41:47 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 625d712a-d587-4298-969c-4d5e0e737dbd
00:14:46.458    22:41:47 sma.sma_discovery -- sma/common.sh@20 -- # python
00:14:46.717  {}
00:14:46.717    22:41:47 sma.sma_discovery -- sma/discovery.sh@290 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:46.717    22:41:47 sma.sma_discovery -- sma/discovery.sh@290 -- # jq -r '. | length'
00:14:46.975   22:41:47 sma.sma_discovery -- sma/discovery.sh@290 -- # [[ 0 -eq 0 ]]
00:14:46.975    22:41:47 sma.sma_discovery -- sma/discovery.sh@291 -- # jq -r '.[].namespaces | length'
00:14:46.975    22:41:47 sma.sma_discovery -- sma/discovery.sh@291 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:47.233   22:41:47 sma.sma_discovery -- sma/discovery.sh@291 -- # [[ 0 -eq 0 ]]
00:14:47.233   22:41:47 sma.sma_discovery -- sma/discovery.sh@294 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 4b56f4c2-4d3b-460d-b5cc-43381ec17344 8010 8011
00:14:47.233   22:41:47 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:14:47.233   22:41:47 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:14:47.233   22:41:47 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:47.233    22:41:47 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 4b56f4c2-4d3b-460d-b5cc-43381ec17344 8010 8011
00:14:47.233    22:41:47 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=4b56f4c2-4d3b-460d-b5cc-43381ec17344
00:14:47.233    22:41:47 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:14:47.233    22:41:47 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:14:47.233     22:41:47 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 4b56f4c2-4d3b-460d-b5cc-43381ec17344
00:14:47.233     22:41:47 sma.sma_discovery -- sma/common.sh@20 -- # python
00:14:47.233     22:41:47 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8010 8011
00:14:47.233     22:41:47 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8010' '8011')
00:14:47.233     22:41:47 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:14:47.233     22:41:47 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:14:47.233     22:41:47 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:14:47.233     22:41:47 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:14:47.233     22:41:47 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:14:47.233     22:41:47 sma.sma_discovery -- sma/discovery.sh@44 -- # echo ,
00:14:47.233     22:41:47 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:14:47.233     22:41:47 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:14:47.233     22:41:47 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:14:47.233     22:41:47 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:14:47.233     22:41:47 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:14:47.233     22:41:47 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:14:48.430  {}
00:14:48.689    22:41:49 sma.sma_discovery -- sma/discovery.sh@297 -- # jq -r '. | length'
00:14:48.689    22:41:49 sma.sma_discovery -- sma/discovery.sh@297 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:48.689   22:41:49 sma.sma_discovery -- sma/discovery.sh@297 -- # [[ 1 -eq 1 ]]
00:14:48.948    22:41:49 sma.sma_discovery -- sma/discovery.sh@298 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:48.948    22:41:49 sma.sma_discovery -- sma/discovery.sh@298 -- # jq -r '.[].namespaces | length'
00:14:48.948   22:41:49 sma.sma_discovery -- sma/discovery.sh@298 -- # [[ 1 -eq 1 ]]
00:14:48.948    22:41:49 sma.sma_discovery -- sma/discovery.sh@299 -- # jq -r '.[].namespaces[0].uuid'
00:14:48.948    22:41:49 sma.sma_discovery -- sma/discovery.sh@299 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:49.206   22:41:49 sma.sma_discovery -- sma/discovery.sh@299 -- # [[ 4b56f4c2-4d3b-460d-b5cc-43381ec17344 == \4\b\5\6\f\4\c\2\-\4\d\3\b\-\4\6\0\d\-\b\5\c\c\-\4\3\3\8\1\e\c\1\7\3\4\4 ]]
00:14:49.206   22:41:49 sma.sma_discovery -- sma/discovery.sh@302 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 661d563c-e122-4df4-b815-d375358c3b20 8011
00:14:49.206   22:41:49 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:14:49.206   22:41:49 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:14:49.206   22:41:49 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:49.206    22:41:49 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 661d563c-e122-4df4-b815-d375358c3b20 8011
00:14:49.206    22:41:49 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=661d563c-e122-4df4-b815-d375358c3b20
00:14:49.206    22:41:49 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:14:49.206    22:41:49 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:14:49.206     22:41:49 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 661d563c-e122-4df4-b815-d375358c3b20
00:14:49.206     22:41:49 sma.sma_discovery -- sma/common.sh@20 -- # python
00:14:49.465     22:41:50 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8011
00:14:49.465     22:41:50 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8011')
00:14:49.465     22:41:50 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:14:49.465     22:41:50 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:14:49.465     22:41:50 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:14:49.465     22:41:50 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:14:49.465     22:41:50 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:14:49.465     22:41:50 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:14:49.465     22:41:50 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:14:49.724  {}
00:14:49.724    22:41:50 sma.sma_discovery -- sma/discovery.sh@305 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:49.724    22:41:50 sma.sma_discovery -- sma/discovery.sh@305 -- # jq -r '. | length'
00:14:49.983   22:41:50 sma.sma_discovery -- sma/discovery.sh@305 -- # [[ 1 -eq 1 ]]
00:14:49.983    22:41:50 sma.sma_discovery -- sma/discovery.sh@306 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:49.983    22:41:50 sma.sma_discovery -- sma/discovery.sh@306 -- # jq -r '.[].namespaces | length'
00:14:50.241   22:41:50 sma.sma_discovery -- sma/discovery.sh@306 -- # [[ 2 -eq 2 ]]
00:14:50.241   22:41:50 sma.sma_discovery -- sma/discovery.sh@307 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:50.241   22:41:50 sma.sma_discovery -- sma/discovery.sh@307 -- # jq -r '.[].namespaces[].uuid'
00:14:50.241   22:41:50 sma.sma_discovery -- sma/discovery.sh@307 -- # grep 4b56f4c2-4d3b-460d-b5cc-43381ec17344
00:14:50.241  4b56f4c2-4d3b-460d-b5cc-43381ec17344
00:14:50.241   22:41:50 sma.sma_discovery -- sma/discovery.sh@308 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:50.241   22:41:50 sma.sma_discovery -- sma/discovery.sh@308 -- # jq -r '.[].namespaces[].uuid'
00:14:50.241   22:41:50 sma.sma_discovery -- sma/discovery.sh@308 -- # grep 661d563c-e122-4df4-b815-d375358c3b20
00:14:50.500  661d563c-e122-4df4-b815-d375358c3b20
00:14:50.500   22:41:51 sma.sma_discovery -- sma/discovery.sh@311 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 625d712a-d587-4298-969c-4d5e0e737dbd
00:14:50.500   22:41:51 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:50.500    22:41:51 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 625d712a-d587-4298-969c-4d5e0e737dbd
00:14:50.500    22:41:51 sma.sma_discovery -- sma/common.sh@20 -- # python
00:14:50.758  [2024-12-10 22:41:51.462115] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 625d712a-d587-4298-969c-4d5e0e737dbd
00:14:50.758  {}
00:14:50.758   22:41:51 sma.sma_discovery -- sma/discovery.sh@312 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 4b56f4c2-4d3b-460d-b5cc-43381ec17344
00:14:50.758   22:41:51 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:50.758    22:41:51 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 4b56f4c2-4d3b-460d-b5cc-43381ec17344
00:14:50.758    22:41:51 sma.sma_discovery -- sma/common.sh@20 -- # python
00:14:51.017  {}
00:14:51.017   22:41:51 sma.sma_discovery -- sma/discovery.sh@313 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 661d563c-e122-4df4-b815-d375358c3b20
00:14:51.017   22:41:51 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:51.017    22:41:51 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 661d563c-e122-4df4-b815-d375358c3b20
00:14:51.017    22:41:51 sma.sma_discovery -- sma/common.sh@20 -- # python
00:14:51.276  {}
00:14:51.276   22:41:52 sma.sma_discovery -- sma/discovery.sh@314 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:14:51.276   22:41:52 sma.sma_discovery -- sma/discovery.sh@95 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:51.534  {}
00:14:51.535    22:41:52 sma.sma_discovery -- sma/discovery.sh@315 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:51.535    22:41:52 sma.sma_discovery -- sma/discovery.sh@315 -- # jq -r '. | length'
00:14:51.793   22:41:52 sma.sma_discovery -- sma/discovery.sh@315 -- # [[ 0 -eq 0 ]]
00:14:51.793    22:41:52 sma.sma_discovery -- sma/discovery.sh@317 -- # create_device nqn.2016-06.io.spdk:local0
00:14:51.793    22:41:52 sma.sma_discovery -- sma/discovery.sh@317 -- # jq -r .handle
00:14:51.793    22:41:52 sma.sma_discovery -- sma/discovery.sh@69 -- # local nqn=nqn.2016-06.io.spdk:local0
00:14:51.793    22:41:52 sma.sma_discovery -- sma/discovery.sh@70 -- # local volume_id=
00:14:51.793    22:41:52 sma.sma_discovery -- sma/discovery.sh@71 -- # local volume=
00:14:51.793    22:41:52 sma.sma_discovery -- sma/discovery.sh@73 -- # shift
00:14:51.793    22:41:52 sma.sma_discovery -- sma/discovery.sh@74 -- # [[ -n '' ]]
00:14:51.793    22:41:52 sma.sma_discovery -- sma/discovery.sh@78 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:52.052  [2024-12-10 22:41:52.710049] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 ***
00:14:52.052   22:41:52 sma.sma_discovery -- sma/discovery.sh@317 -- # device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:14:52.052   22:41:52 sma.sma_discovery -- sma/discovery.sh@320 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:52.052    22:41:52 sma.sma_discovery -- sma/discovery.sh@320 -- # uuid2base64 625d712a-d587-4298-969c-4d5e0e737dbd
00:14:52.052    22:41:52 sma.sma_discovery -- sma/common.sh@20 -- # python
00:14:53.686  {}
00:14:53.686    22:41:54 sma.sma_discovery -- sma/discovery.sh@345 -- # jq -r '. | length'
00:14:53.686    22:41:54 sma.sma_discovery -- sma/discovery.sh@345 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:53.686   22:41:54 sma.sma_discovery -- sma/discovery.sh@345 -- # [[ 1 -eq 1 ]]
00:14:53.686   22:41:54 sma.sma_discovery -- sma/discovery.sh@346 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:53.686   22:41:54 sma.sma_discovery -- sma/discovery.sh@346 -- # jq -r '.[].trid.trsvcid'
00:14:53.686   22:41:54 sma.sma_discovery -- sma/discovery.sh@346 -- # grep 8009
00:14:53.944  8009
00:14:53.945    22:41:54 sma.sma_discovery -- sma/discovery.sh@347 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:53.945    22:41:54 sma.sma_discovery -- sma/discovery.sh@347 -- # jq -r '.[].namespaces | length'
00:14:54.203   22:41:54 sma.sma_discovery -- sma/discovery.sh@347 -- # [[ 1 -eq 1 ]]
00:14:54.203    22:41:54 sma.sma_discovery -- sma/discovery.sh@348 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:54.203    22:41:54 sma.sma_discovery -- sma/discovery.sh@348 -- # jq -r '.[].namespaces[0].uuid'
00:14:54.462   22:41:55 sma.sma_discovery -- sma/discovery.sh@348 -- # [[ 625d712a-d587-4298-969c-4d5e0e737dbd == \6\2\5\d\7\1\2\a\-\d\5\8\7\-\4\2\9\8\-\9\6\9\c\-\4\d\5\e\0\e\7\3\7\d\b\d ]]
00:14:54.462   22:41:55 sma.sma_discovery -- sma/discovery.sh@351 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:54.462    22:41:55 sma.sma_discovery -- sma/discovery.sh@351 -- # uuid2base64 4b56f4c2-4d3b-460d-b5cc-43381ec17344
00:14:54.462    22:41:55 sma.sma_discovery -- sma/common.sh@20 -- # python
00:14:54.462   22:41:55 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:14:54.462   22:41:55 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:54.462   22:41:55 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:54.462   22:41:55 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:54.462    22:41:55 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:54.462   22:41:55 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:54.462    22:41:55 sma.sma_discovery -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:54.462   22:41:55 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:54.462   22:41:55 sma.sma_discovery -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:54.462   22:41:55 sma.sma_discovery -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:14:54.462   22:41:55 sma.sma_discovery -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:55.838  Traceback (most recent call last):
00:14:55.838    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:14:55.838      main(sys.argv[1:])
00:14:55.838    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:14:55.838      result = client.call(request['method'], request.get('params', {}))
00:14:55.838               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:14:55.838    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:14:55.838      response = func(request=json_format.ParseDict(params, input()))
00:14:55.838                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:14:55.838    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:14:55.838      return _end_unary_response_blocking(state, call, False, None)
00:14:55.838             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:14:55.838    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:14:55.838      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:14:55.838      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:14:55.838  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:14:55.838  	status = StatusCode.INVALID_ARGUMENT
00:14:55.838  	details = "Unexpected subsystem NQN"
00:14:55.838  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {grpc_message:"Unexpected subsystem NQN", grpc_status:3, created_time:"2024-12-10T22:41:56.338895133+01:00"}"
00:14:55.838  >
00:14:55.838   22:41:56 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:14:55.838   22:41:56 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:55.838   22:41:56 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:55.838   22:41:56 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:55.838    22:41:56 sma.sma_discovery -- sma/discovery.sh@377 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:55.838    22:41:56 sma.sma_discovery -- sma/discovery.sh@377 -- # jq -r '. | length'
00:14:55.838   22:41:56 sma.sma_discovery -- sma/discovery.sh@377 -- # [[ 1 -eq 1 ]]
00:14:55.838   22:41:56 sma.sma_discovery -- sma/discovery.sh@378 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:14:55.838   22:41:56 sma.sma_discovery -- sma/discovery.sh@378 -- # jq -r '.[].trid.trsvcid'
00:14:55.838   22:41:56 sma.sma_discovery -- sma/discovery.sh@378 -- # grep 8009
00:14:56.097  8009
00:14:56.097    22:41:56 sma.sma_discovery -- sma/discovery.sh@379 -- # jq -r '.[].namespaces | length'
00:14:56.097    22:41:56 sma.sma_discovery -- sma/discovery.sh@379 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:56.355   22:41:57 sma.sma_discovery -- sma/discovery.sh@379 -- # [[ 1 -eq 1 ]]
00:14:56.355    22:41:57 sma.sma_discovery -- sma/discovery.sh@380 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:14:56.355    22:41:57 sma.sma_discovery -- sma/discovery.sh@380 -- # jq -r '.[].namespaces[0].uuid'
00:14:56.614   22:41:57 sma.sma_discovery -- sma/discovery.sh@380 -- # [[ 625d712a-d587-4298-969c-4d5e0e737dbd == \6\2\5\d\7\1\2\a\-\d\5\8\7\-\4\2\9\8\-\9\6\9\c\-\4\d\5\e\0\e\7\3\7\d\b\d ]]
00:14:56.614   22:41:57 sma.sma_discovery -- sma/discovery.sh@383 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:56.614    22:41:57 sma.sma_discovery -- sma/discovery.sh@383 -- # uuid2base64 4b56f4c2-4d3b-460d-b5cc-43381ec17344
00:14:56.614    22:41:57 sma.sma_discovery -- sma/common.sh@20 -- # python
00:14:56.614   22:41:57 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:14:56.614   22:41:57 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:56.614   22:41:57 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:56.614   22:41:57 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:56.614    22:41:57 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:56.614   22:41:57 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:56.614    22:41:57 sma.sma_discovery -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:56.614   22:41:57 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:56.614   22:41:57 sma.sma_discovery -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:56.614   22:41:57 sma.sma_discovery -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:14:56.614   22:41:57 sma.sma_discovery -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:02.142  [2024-12-10 22:42:02.526178] bdev_nvme.c:7609:discovery_poller: *ERROR*: Discovery[127.0.0.1:8010] timed out while attaching NVM ctrlrs
00:15:02.142  Traceback (most recent call last):
00:15:02.142    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:15:02.142      main(sys.argv[1:])
00:15:02.142    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:15:02.142      result = client.call(request['method'], request.get('params', {}))
00:15:02.142               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:02.142    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:15:02.142      response = func(request=json_format.ParseDict(params, input()))
00:15:02.142                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:02.142    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:15:02.142      return _end_unary_response_blocking(state, call, False, None)
00:15:02.142             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:02.142    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:15:02.142      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:15:02.142      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:02.142  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:15:02.142  	status = StatusCode.INTERNAL
00:15:02.142  	details = "Failed to start discovery"
00:15:02.142  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {grpc_message:"Failed to start discovery", grpc_status:13, created_time:"2024-12-10T22:42:02.530479372+01:00"}"
00:15:02.142  >
00:15:02.142   22:42:02 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:15:02.142   22:42:02 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:02.142   22:42:02 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:02.142   22:42:02 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:02.142    22:42:02 sma.sma_discovery -- sma/discovery.sh@408 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:02.142    22:42:02 sma.sma_discovery -- sma/discovery.sh@408 -- # jq -r '. | length'
00:15:02.142   22:42:02 sma.sma_discovery -- sma/discovery.sh@408 -- # [[ 1 -eq 1 ]]
00:15:02.142   22:42:02 sma.sma_discovery -- sma/discovery.sh@409 -- # jq -r '.[].trid.trsvcid'
00:15:02.142   22:42:02 sma.sma_discovery -- sma/discovery.sh@409 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:02.142   22:42:02 sma.sma_discovery -- sma/discovery.sh@409 -- # grep 8009
00:15:02.401  8009
00:15:02.401    22:42:03 sma.sma_discovery -- sma/discovery.sh@410 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:02.401    22:42:03 sma.sma_discovery -- sma/discovery.sh@410 -- # jq -r '.[].namespaces | length'
00:15:02.659   22:42:03 sma.sma_discovery -- sma/discovery.sh@410 -- # [[ 1 -eq 1 ]]
00:15:02.659    22:42:03 sma.sma_discovery -- sma/discovery.sh@411 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:02.659    22:42:03 sma.sma_discovery -- sma/discovery.sh@411 -- # jq -r '.[].namespaces[0].uuid'
00:15:02.918   22:42:03 sma.sma_discovery -- sma/discovery.sh@411 -- # [[ 625d712a-d587-4298-969c-4d5e0e737dbd == \6\2\5\d\7\1\2\a\-\d\5\8\7\-\4\2\9\8\-\9\6\9\c\-\4\d\5\e\0\e\7\3\7\d\b\d ]]
00:15:02.918    22:42:03 sma.sma_discovery -- sma/discovery.sh@414 -- # uuidgen
00:15:02.918   22:42:03 sma.sma_discovery -- sma/discovery.sh@414 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 8cba7ac5-e427-405b-9f33-15a1425ed471 8008
00:15:02.918   22:42:03 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:15:02.918   22:42:03 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 8cba7ac5-e427-405b-9f33-15a1425ed471 8008
00:15:02.918   22:42:03 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:15:02.918   22:42:03 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:02.918    22:42:03 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t attach_volume
00:15:02.918   22:42:03 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:02.918   22:42:03 sma.sma_discovery -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 8cba7ac5-e427-405b-9f33-15a1425ed471 8008
00:15:02.918   22:42:03 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:15:02.918   22:42:03 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:15:02.918   22:42:03 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:02.918    22:42:03 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 8cba7ac5-e427-405b-9f33-15a1425ed471 8008
00:15:02.918    22:42:03 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=8cba7ac5-e427-405b-9f33-15a1425ed471
00:15:02.918    22:42:03 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:15:02.918    22:42:03 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:15:02.918     22:42:03 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 8cba7ac5-e427-405b-9f33-15a1425ed471
00:15:02.918     22:42:03 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:02.918     22:42:03 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8008
00:15:02.918     22:42:03 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8008')
00:15:02.918     22:42:03 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:15:02.918     22:42:03 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:15:02.918     22:42:03 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:15:02.918     22:42:03 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:15:02.918     22:42:03 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:15:02.918     22:42:03 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:15:02.918     22:42:03 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:15:04.112  [2024-12-10 22:42:04.856472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:15:04.112  [2024-12-10 22:42:04.856537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500024e080 with addr=127.0.0.1, port=8008
00:15:04.112  [2024-12-10 22:42:04.856598] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:15:04.112  [2024-12-10 22:42:04.856614] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:15:04.112  [2024-12-10 22:42:04.856627] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] could not start discovery connect
00:15:05.488  [2024-12-10 22:42:05.858740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:15:05.488  [2024-12-10 22:42:05.858810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500024e300 with addr=127.0.0.1, port=8008
00:15:05.488  [2024-12-10 22:42:05.858860] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:15:05.489  [2024-12-10 22:42:05.858873] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:15:05.489  [2024-12-10 22:42:05.858885] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] could not start discovery connect
00:15:06.424  [2024-12-10 22:42:06.861085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:15:06.424  [2024-12-10 22:42:06.861137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500024e580 with addr=127.0.0.1, port=8008
00:15:06.424  [2024-12-10 22:42:06.861188] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:15:06.424  [2024-12-10 22:42:06.861201] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:15:06.424  [2024-12-10 22:42:06.861213] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] could not start discovery connect
00:15:07.358  [2024-12-10 22:42:07.863385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:15:07.358  [2024-12-10 22:42:07.863433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500024e800 with addr=127.0.0.1, port=8008
00:15:07.358  [2024-12-10 22:42:07.863481] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:15:07.358  [2024-12-10 22:42:07.863494] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:15:07.358  [2024-12-10 22:42:07.863506] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] could not start discovery connect
00:15:08.295  [2024-12-10 22:42:08.865567] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] timed out while attaching discovery ctrlr
00:15:08.295  Traceback (most recent call last):
00:15:08.295    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:15:08.295      main(sys.argv[1:])
00:15:08.295    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:15:08.295      result = client.call(request['method'], request.get('params', {}))
00:15:08.295               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:08.295    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:15:08.295      response = func(request=json_format.ParseDict(params, input()))
00:15:08.295                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:08.295    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:15:08.295      return _end_unary_response_blocking(state, call, False, None)
00:15:08.295             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:08.295    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:15:08.295      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:15:08.295      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:08.295  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:15:08.295  	status = StatusCode.INTERNAL
00:15:08.295  	details = "Failed to start discovery"
00:15:08.295  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {created_time:"2024-12-10T22:42:08.869942899+01:00", grpc_status:13, grpc_message:"Failed to start discovery"}"
00:15:08.295  >
00:15:08.295   22:42:08 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:15:08.295   22:42:08 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:08.295   22:42:08 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:08.295   22:42:08 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:08.295    22:42:08 sma.sma_discovery -- sma/discovery.sh@415 -- # jq -r '. | length'
00:15:08.295    22:42:08 sma.sma_discovery -- sma/discovery.sh@415 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:08.554   22:42:09 sma.sma_discovery -- sma/discovery.sh@415 -- # [[ 1 -eq 1 ]]
00:15:08.554   22:42:09 sma.sma_discovery -- sma/discovery.sh@416 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:08.554   22:42:09 sma.sma_discovery -- sma/discovery.sh@416 -- # jq -r '.[].trid.trsvcid'
00:15:08.554   22:42:09 sma.sma_discovery -- sma/discovery.sh@416 -- # grep 8009
00:15:08.813  8009
00:15:08.813   22:42:09 sma.sma_discovery -- sma/discovery.sh@420 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock1 nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:node1 1
00:15:08.813   22:42:09 sma.sma_discovery -- sma/discovery.sh@422 -- # sleep 2
00:15:09.380  WARNING:spdk.sma.volume.volume:Found disconnected volume: 625d712a-d587-4298-969c-4d5e0e737dbd
00:15:11.282    22:42:11 sma.sma_discovery -- sma/discovery.sh@423 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:11.282    22:42:11 sma.sma_discovery -- sma/discovery.sh@423 -- # jq -r '. | length'
00:15:11.282   22:42:11 sma.sma_discovery -- sma/discovery.sh@423 -- # [[ 0 -eq 0 ]]
00:15:11.282   22:42:11 sma.sma_discovery -- sma/discovery.sh@424 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock1 nvmf_subsystem_add_ns nqn.2016-06.io.spdk:node1 625d712a-d587-4298-969c-4d5e0e737dbd
00:15:11.540   22:42:12 sma.sma_discovery -- sma/discovery.sh@428 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 4b56f4c2-4d3b-460d-b5cc-43381ec17344 8010
00:15:11.540   22:42:12 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:15:11.540   22:42:12 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:15:11.540   22:42:12 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:11.541    22:42:12 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 4b56f4c2-4d3b-460d-b5cc-43381ec17344 8010
00:15:11.541    22:42:12 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=4b56f4c2-4d3b-460d-b5cc-43381ec17344
00:15:11.541    22:42:12 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:15:11.541    22:42:12 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:15:11.541     22:42:12 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 4b56f4c2-4d3b-460d-b5cc-43381ec17344
00:15:11.541     22:42:12 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:11.541     22:42:12 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8010
00:15:11.541     22:42:12 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8010')
00:15:11.541     22:42:12 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:15:11.541     22:42:12 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:15:11.541     22:42:12 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:15:11.541     22:42:12 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:15:11.541     22:42:12 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:15:11.541     22:42:12 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:15:11.541     22:42:12 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:15:12.734  {}
00:15:12.992   22:42:13 sma.sma_discovery -- sma/discovery.sh@429 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 661d563c-e122-4df4-b815-d375358c3b20 8010
00:15:12.992   22:42:13 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:15:12.992   22:42:13 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:15:12.992   22:42:13 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:12.992    22:42:13 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 661d563c-e122-4df4-b815-d375358c3b20 8010
00:15:12.992    22:42:13 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=661d563c-e122-4df4-b815-d375358c3b20
00:15:12.992    22:42:13 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:15:12.992    22:42:13 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:15:12.992     22:42:13 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 661d563c-e122-4df4-b815-d375358c3b20
00:15:12.992     22:42:13 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:12.992     22:42:13 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8010
00:15:12.992     22:42:13 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8010')
00:15:12.992     22:42:13 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:15:12.992     22:42:13 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:15:12.992     22:42:13 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:15:12.992     22:42:13 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:15:12.992     22:42:13 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:15:12.992     22:42:13 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:15:12.992     22:42:13 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:15:13.251  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:13.251  I0000 00:00:1733866933.874795  158592 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:13.251  I0000 00:00:1733866933.876564  158592 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:13.251  {}
00:15:13.251    22:42:13 sma.sma_discovery -- sma/discovery.sh@430 -- # jq -r '.[].namespaces | length'
00:15:13.251    22:42:13 sma.sma_discovery -- sma/discovery.sh@430 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:13.509   22:42:14 sma.sma_discovery -- sma/discovery.sh@430 -- # [[ 2 -eq 2 ]]
00:15:13.509    22:42:14 sma.sma_discovery -- sma/discovery.sh@431 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:13.509    22:42:14 sma.sma_discovery -- sma/discovery.sh@431 -- # jq -r '. | length'
00:15:13.769   22:42:14 sma.sma_discovery -- sma/discovery.sh@431 -- # [[ 1 -eq 1 ]]
00:15:13.769   22:42:14 sma.sma_discovery -- sma/discovery.sh@432 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2 nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:node2 2
00:15:14.028   22:42:14 sma.sma_discovery -- sma/discovery.sh@434 -- # sleep 2
00:15:14.961  WARNING:spdk.sma.volume.volume:Found disconnected volume: 661d563c-e122-4df4-b815-d375358c3b20
00:15:15.896    22:42:16 sma.sma_discovery -- sma/discovery.sh@436 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:15.896    22:42:16 sma.sma_discovery -- sma/discovery.sh@436 -- # jq -r '.[].namespaces | length'
00:15:16.153   22:42:16 sma.sma_discovery -- sma/discovery.sh@436 -- # [[ 1 -eq 1 ]]
00:15:16.153    22:42:16 sma.sma_discovery -- sma/discovery.sh@437 -- # jq -r '. | length'
00:15:16.153    22:42:16 sma.sma_discovery -- sma/discovery.sh@437 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:16.412   22:42:17 sma.sma_discovery -- sma/discovery.sh@437 -- # [[ 1 -eq 1 ]]
00:15:16.412   22:42:17 sma.sma_discovery -- sma/discovery.sh@438 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2 nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:node2 1
00:15:16.669   22:42:17 sma.sma_discovery -- sma/discovery.sh@440 -- # sleep 2
00:15:16.927  WARNING:spdk.sma.volume.volume:Found disconnected volume: 4b56f4c2-4d3b-460d-b5cc-43381ec17344
00:15:18.828    22:42:19 sma.sma_discovery -- sma/discovery.sh@442 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:18.828    22:42:19 sma.sma_discovery -- sma/discovery.sh@442 -- # jq -r '.[].namespaces | length'
00:15:18.828   22:42:19 sma.sma_discovery -- sma/discovery.sh@442 -- # [[ 0 -eq 0 ]]
00:15:18.828    22:42:19 sma.sma_discovery -- sma/discovery.sh@443 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:15:18.828    22:42:19 sma.sma_discovery -- sma/discovery.sh@443 -- # jq -r '. | length'
00:15:19.086   22:42:19 sma.sma_discovery -- sma/discovery.sh@443 -- # [[ 0 -eq 0 ]]
00:15:19.086   22:42:19 sma.sma_discovery -- sma/discovery.sh@444 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2 nvmf_subsystem_add_ns nqn.2016-06.io.spdk:node2 4b56f4c2-4d3b-460d-b5cc-43381ec17344
00:15:19.344   22:42:19 sma.sma_discovery -- sma/discovery.sh@445 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2 nvmf_subsystem_add_ns nqn.2016-06.io.spdk:node2 661d563c-e122-4df4-b815-d375358c3b20
00:15:19.602   22:42:20 sma.sma_discovery -- sma/discovery.sh@447 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:15:19.602   22:42:20 sma.sma_discovery -- sma/discovery.sh@95 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:19.860  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:19.860  I0000 00:00:1733866940.403064  159870 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:19.860  I0000 00:00:1733866940.404966  159870 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:19.860  {}
00:15:19.860   22:42:20 sma.sma_discovery -- sma/discovery.sh@449 -- # cleanup
00:15:19.860   22:42:20 sma.sma_discovery -- sma/discovery.sh@27 -- # killprocess 148934
00:15:19.860   22:42:20 sma.sma_discovery -- common/autotest_common.sh@954 -- # '[' -z 148934 ']'
00:15:19.860   22:42:20 sma.sma_discovery -- common/autotest_common.sh@958 -- # kill -0 148934
00:15:19.860    22:42:20 sma.sma_discovery -- common/autotest_common.sh@959 -- # uname
00:15:19.860   22:42:20 sma.sma_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:19.860    22:42:20 sma.sma_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 148934
00:15:19.860   22:42:20 sma.sma_discovery -- common/autotest_common.sh@960 -- # process_name=python3
00:15:19.860   22:42:20 sma.sma_discovery -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:19.860   22:42:20 sma.sma_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 148934'
00:15:19.860  killing process with pid 148934
00:15:19.860   22:42:20 sma.sma_discovery -- common/autotest_common.sh@973 -- # kill 148934
00:15:19.860   22:42:20 sma.sma_discovery -- common/autotest_common.sh@978 -- # wait 148934
00:15:19.860   22:42:20 sma.sma_discovery -- sma/discovery.sh@28 -- # killprocess 148933
00:15:19.860   22:42:20 sma.sma_discovery -- common/autotest_common.sh@954 -- # '[' -z 148933 ']'
00:15:19.860   22:42:20 sma.sma_discovery -- common/autotest_common.sh@958 -- # kill -0 148933
00:15:19.860    22:42:20 sma.sma_discovery -- common/autotest_common.sh@959 -- # uname
00:15:19.860   22:42:20 sma.sma_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:19.860    22:42:20 sma.sma_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 148933
00:15:19.860   22:42:20 sma.sma_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:15:19.860   22:42:20 sma.sma_discovery -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:15:19.860   22:42:20 sma.sma_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 148933'
00:15:19.860  killing process with pid 148933
00:15:19.860   22:42:20 sma.sma_discovery -- common/autotest_common.sh@973 -- # kill 148933
00:15:19.860   22:42:20 sma.sma_discovery -- common/autotest_common.sh@978 -- # wait 148933
00:15:21.763   22:42:22 sma.sma_discovery -- sma/discovery.sh@29 -- # killprocess 148931
00:15:21.763   22:42:22 sma.sma_discovery -- common/autotest_common.sh@954 -- # '[' -z 148931 ']'
00:15:21.763   22:42:22 sma.sma_discovery -- common/autotest_common.sh@958 -- # kill -0 148931
00:15:21.763    22:42:22 sma.sma_discovery -- common/autotest_common.sh@959 -- # uname
00:15:21.764   22:42:22 sma.sma_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:21.764    22:42:22 sma.sma_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 148931
00:15:21.764   22:42:22 sma.sma_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:21.764   22:42:22 sma.sma_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:21.764   22:42:22 sma.sma_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 148931'
00:15:21.764  killing process with pid 148931
00:15:21.764   22:42:22 sma.sma_discovery -- common/autotest_common.sh@973 -- # kill 148931
00:15:21.764   22:42:22 sma.sma_discovery -- common/autotest_common.sh@978 -- # wait 148931
00:15:25.048   22:42:25 sma.sma_discovery -- sma/discovery.sh@30 -- # killprocess 148932
00:15:25.048   22:42:25 sma.sma_discovery -- common/autotest_common.sh@954 -- # '[' -z 148932 ']'
00:15:25.048   22:42:25 sma.sma_discovery -- common/autotest_common.sh@958 -- # kill -0 148932
00:15:25.048    22:42:25 sma.sma_discovery -- common/autotest_common.sh@959 -- # uname
00:15:25.048   22:42:25 sma.sma_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:25.048    22:42:25 sma.sma_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 148932
00:15:25.048   22:42:25 sma.sma_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:15:25.048   22:42:25 sma.sma_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:15:25.048   22:42:25 sma.sma_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 148932'
00:15:25.048  killing process with pid 148932
00:15:25.048   22:42:25 sma.sma_discovery -- common/autotest_common.sh@973 -- # kill 148932
00:15:25.048   22:42:25 sma.sma_discovery -- common/autotest_common.sh@978 -- # wait 148932
00:15:27.577   22:42:27 sma.sma_discovery -- sma/discovery.sh@450 -- # trap - SIGINT SIGTERM EXIT
00:15:27.577  
00:15:27.577  real	1m2.480s
00:15:27.577  user	3m18.081s
00:15:27.577  sys	0m7.954s
00:15:27.577   22:42:27 sma.sma_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:27.577   22:42:27 sma.sma_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:27.577  ************************************
00:15:27.577  END TEST sma_discovery
00:15:27.577  ************************************
00:15:27.577   22:42:27 sma -- sma/sma.sh@15 -- # run_test sma_vhost /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vhost_blk.sh
00:15:27.577   22:42:27 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:27.577   22:42:27 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:27.577   22:42:27 sma -- common/autotest_common.sh@10 -- # set +x
00:15:27.577  ************************************
00:15:27.577  START TEST sma_vhost
00:15:27.577  ************************************
00:15:27.577   22:42:27 sma.sma_vhost -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vhost_blk.sh
00:15:27.577  * Looking for test storage...
00:15:27.577  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:15:27.577    22:42:27 sma.sma_vhost -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:15:27.577     22:42:27 sma.sma_vhost -- common/autotest_common.sh@1711 -- # lcov --version
00:15:27.577     22:42:27 sma.sma_vhost -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:15:27.577    22:42:27 sma.sma_vhost -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:15:27.577    22:42:27 sma.sma_vhost -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:27.577    22:42:27 sma.sma_vhost -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:27.577    22:42:27 sma.sma_vhost -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:27.577    22:42:27 sma.sma_vhost -- scripts/common.sh@336 -- # IFS=.-:
00:15:27.577    22:42:27 sma.sma_vhost -- scripts/common.sh@336 -- # read -ra ver1
00:15:27.577    22:42:27 sma.sma_vhost -- scripts/common.sh@337 -- # IFS=.-:
00:15:27.577    22:42:27 sma.sma_vhost -- scripts/common.sh@337 -- # read -ra ver2
00:15:27.577    22:42:27 sma.sma_vhost -- scripts/common.sh@338 -- # local 'op=<'
00:15:27.577    22:42:27 sma.sma_vhost -- scripts/common.sh@340 -- # ver1_l=2
00:15:27.577    22:42:27 sma.sma_vhost -- scripts/common.sh@341 -- # ver2_l=1
00:15:27.577    22:42:27 sma.sma_vhost -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:27.577    22:42:27 sma.sma_vhost -- scripts/common.sh@344 -- # case "$op" in
00:15:27.577    22:42:27 sma.sma_vhost -- scripts/common.sh@345 -- # : 1
00:15:27.577    22:42:27 sma.sma_vhost -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:27.577    22:42:27 sma.sma_vhost -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:27.577     22:42:27 sma.sma_vhost -- scripts/common.sh@365 -- # decimal 1
00:15:27.577     22:42:27 sma.sma_vhost -- scripts/common.sh@353 -- # local d=1
00:15:27.577     22:42:27 sma.sma_vhost -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:27.577     22:42:27 sma.sma_vhost -- scripts/common.sh@355 -- # echo 1
00:15:27.577    22:42:27 sma.sma_vhost -- scripts/common.sh@365 -- # ver1[v]=1
00:15:27.577     22:42:27 sma.sma_vhost -- scripts/common.sh@366 -- # decimal 2
00:15:27.577     22:42:27 sma.sma_vhost -- scripts/common.sh@353 -- # local d=2
00:15:27.577     22:42:27 sma.sma_vhost -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:27.577     22:42:27 sma.sma_vhost -- scripts/common.sh@355 -- # echo 2
00:15:27.577    22:42:27 sma.sma_vhost -- scripts/common.sh@366 -- # ver2[v]=2
00:15:27.577    22:42:27 sma.sma_vhost -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:27.577    22:42:27 sma.sma_vhost -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:27.577    22:42:27 sma.sma_vhost -- scripts/common.sh@368 -- # return 0
00:15:27.577    22:42:27 sma.sma_vhost -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:27.577    22:42:27 sma.sma_vhost -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:15:27.577  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:27.577  		--rc genhtml_branch_coverage=1
00:15:27.577  		--rc genhtml_function_coverage=1
00:15:27.577  		--rc genhtml_legend=1
00:15:27.577  		--rc geninfo_all_blocks=1
00:15:27.577  		--rc geninfo_unexecuted_blocks=1
00:15:27.577  		
00:15:27.577  		'
00:15:27.577    22:42:27 sma.sma_vhost -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:15:27.577  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:27.577  		--rc genhtml_branch_coverage=1
00:15:27.577  		--rc genhtml_function_coverage=1
00:15:27.577  		--rc genhtml_legend=1
00:15:27.577  		--rc geninfo_all_blocks=1
00:15:27.577  		--rc geninfo_unexecuted_blocks=1
00:15:27.577  		
00:15:27.577  		'
00:15:27.577    22:42:27 sma.sma_vhost -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:15:27.577  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:27.577  		--rc genhtml_branch_coverage=1
00:15:27.577  		--rc genhtml_function_coverage=1
00:15:27.577  		--rc genhtml_legend=1
00:15:27.577  		--rc geninfo_all_blocks=1
00:15:27.577  		--rc geninfo_unexecuted_blocks=1
00:15:27.577  		
00:15:27.577  		'
00:15:27.577    22:42:27 sma.sma_vhost -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:15:27.577  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:27.577  		--rc genhtml_branch_coverage=1
00:15:27.577  		--rc genhtml_function_coverage=1
00:15:27.577  		--rc genhtml_legend=1
00:15:27.577  		--rc geninfo_all_blocks=1
00:15:27.577  		--rc geninfo_unexecuted_blocks=1
00:15:27.577  		
00:15:27.577  		'
00:15:27.577   22:42:27 sma.sma_vhost -- sma/vhost_blk.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:15:27.577    22:42:27 sma.sma_vhost -- vhost/common.sh@6 -- # : false
00:15:27.577    22:42:28 sma.sma_vhost -- vhost/common.sh@7 -- # : /root/vhost_test
00:15:27.577    22:42:28 sma.sma_vhost -- vhost/common.sh@8 -- # : /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:15:27.577    22:42:28 sma.sma_vhost -- vhost/common.sh@9 -- # : qemu-img
00:15:27.577     22:42:28 sma.sma_vhost -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:15:27.577    22:42:28 sma.sma_vhost -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:15:27.577    22:42:28 sma.sma_vhost -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:15:27.577    22:42:28 sma.sma_vhost -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:15:27.577    22:42:28 sma.sma_vhost -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:15:27.577    22:42:28 sma.sma_vhost -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:15:27.577    22:42:28 sma.sma_vhost -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:15:27.577      22:42:28 sma.sma_vhost -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vhost_blk.sh
00:15:27.577     22:42:28 sma.sma_vhost -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:15:27.577    22:42:28 sma.sma_vhost -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:15:27.577    22:42:28 sma.sma_vhost -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:15:27.577    22:42:28 sma.sma_vhost -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:15:27.577    22:42:28 sma.sma_vhost -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:15:27.577    22:42:28 sma.sma_vhost -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:15:27.577    22:42:28 sma.sma_vhost -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:15:27.577     22:42:28 sma.sma_vhost -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:15:27.577     22:42:28 sma.sma_vhost -- common/autotest.config@2 -- # vhost_0_main_core=0
00:15:27.577     22:42:28 sma.sma_vhost -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:15:27.577     22:42:28 sma.sma_vhost -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:15:27.577     22:42:28 sma.sma_vhost -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:15:27.577     22:42:28 sma.sma_vhost -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:15:27.577     22:42:28 sma.sma_vhost -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:15:27.577     22:42:28 sma.sma_vhost -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:15:27.577     22:42:28 sma.sma_vhost -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:15:27.577     22:42:28 sma.sma_vhost -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:15:27.577     22:42:28 sma.sma_vhost -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:15:27.577     22:42:28 sma.sma_vhost -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:15:27.577     22:42:28 sma.sma_vhost -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:15:27.577     22:42:28 sma.sma_vhost -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:15:27.577     22:42:28 sma.sma_vhost -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:15:27.577     22:42:28 sma.sma_vhost -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:15:27.577     22:42:28 sma.sma_vhost -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:15:27.577     22:42:28 sma.sma_vhost -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:15:27.577     22:42:28 sma.sma_vhost -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:15:27.577     22:42:28 sma.sma_vhost -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:15:27.577     22:42:28 sma.sma_vhost -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:15:27.577     22:42:28 sma.sma_vhost -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:15:27.577     22:42:28 sma.sma_vhost -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:15:27.577     22:42:28 sma.sma_vhost -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:15:27.577     22:42:28 sma.sma_vhost -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:15:27.577     22:42:28 sma.sma_vhost -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:15:27.577    22:42:28 sma.sma_vhost -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:15:27.577     22:42:28 sma.sma_vhost -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:15:27.577     22:42:28 sma.sma_vhost -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:15:27.577     22:42:28 sma.sma_vhost -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:15:27.578     22:42:28 sma.sma_vhost -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:15:27.578     22:42:28 sma.sma_vhost -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:15:27.578     22:42:28 sma.sma_vhost -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:15:27.578      22:42:28 sma.sma_vhost -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:15:27.578       22:42:28 sma.sma_vhost -- scheduler/cgroups.sh@244 -- # check_cgroup
00:15:27.578       22:42:28 sma.sma_vhost -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:15:27.578       22:42:28 sma.sma_vhost -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:15:27.578       22:42:28 sma.sma_vhost -- scheduler/cgroups.sh@10 -- # echo 2
00:15:27.578      22:42:28 sma.sma_vhost -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:15:27.578   22:42:28 sma.sma_vhost -- sma/vhost_blk.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:15:27.578   22:42:28 sma.sma_vhost -- sma/vhost_blk.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:15:27.578   22:42:28 sma.sma_vhost -- sma/vhost_blk.sh@49 -- # vm_no=0
00:15:27.578   22:42:28 sma.sma_vhost -- sma/vhost_blk.sh@50 -- # bus_size=32
00:15:27.578   22:42:28 sma.sma_vhost -- sma/vhost_blk.sh@52 -- # timing_enter setup_vm
00:15:27.578   22:42:28 sma.sma_vhost -- common/autotest_common.sh@726 -- # xtrace_disable
00:15:27.578   22:42:28 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:15:27.578   22:42:28 sma.sma_vhost -- sma/vhost_blk.sh@54 -- # vm_setup --force=0 --disk-type=virtio '--qemu-args=-qmp tcp:localhost:9090,server,nowait -device pci-bridge,chassis_nr=1,id=pci.spdk.0 -device pci-bridge,chassis_nr=2,id=pci.spdk.1' --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@518 -- # xtrace_disable
00:15:27.578   22:42:28 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:15:27.578  INFO: Creating new VM in /root/vhost_test/vms/0
00:15:27.578  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:15:27.578  INFO: TASK MASK: 1-2
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@671 -- # local node_num=0
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@672 -- # local boot_disk_present=false
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:15:27.578  INFO: NUMA NODE: 0
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@677 -- # [[ -n '' ]]
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@686 -- # [[ -z '' ]]
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@691 -- # (( 0 == 0 ))
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@691 -- # [[ virtio == virtio* ]]
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@692 -- # disks=("default_virtio.img")
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@701 -- # IFS=,
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@701 -- # read -r disk disk_type _
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@702 -- # [[ -z '' ]]
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@702 -- # disk_type=virtio
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@704 -- # case $disk_type in
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@706 -- # local raw_name=RAWSCSI
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@707 -- # local raw_disk=/root/vhost_test/vms/0/test.img
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@710 -- # [[ -f default_virtio.img ]]
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@714 -- # notice 'Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:15:27.578  INFO: Creating Virtio disc /root/vhost_test/vms/0/test.img
00:15:27.578   22:42:28 sma.sma_vhost -- vhost/common.sh@715 -- # dd if=/dev/zero of=/root/vhost_test/vms/0/test.img bs=1024k count=1024
00:15:27.837  1024+0 records in
00:15:27.837  1024+0 records out
00:15:27.837  1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.459562 s, 2.3 GB/s
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@718 -- # cmd+=(-device "virtio-scsi-pci,num_queues=$queue_number")
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@719 -- # cmd+=(-device "scsi-hd,drive=hd$i,vendor=$raw_name")
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@720 -- # cmd+=(-drive "if=none,id=hd$i,file=$raw_disk,format=raw$raw_cache")
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@780 -- # [[ -n '' ]]
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@785 -- # (( 1 ))
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@785 -- # cmd+=("${qemu_args[@]}")
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/0/run.sh'
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/0/run.sh'
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/0/run.sh'
00:15:27.837  INFO: Saving to /root/vhost_test/vms/0/run.sh
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@787 -- # cat
00:15:27.837    22:42:28 sma.sma_vhost -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 1-2 /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :100 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10002,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/0/qemu.pid -serial file:/root/vhost_test/vms/0/serial.log -D /root/vhost_test/vms/0/qemu.log -chardev file,path=/root/vhost_test/vms/0/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10000-:22,hostfwd=tcp::10001-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device virtio-scsi-pci,num_queues=2 -device scsi-hd,drive=hd,vendor=RAWSCSI -drive if=none,id=hd,file=/root/vhost_test/vms/0/test.img,format=raw '-qmp tcp:localhost:9090,server,nowait -device pci-bridge,chassis_nr=1,id=pci.spdk.0 -device pci-bridge,chassis_nr=2,id=pci.spdk.1'
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/0/run.sh
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@827 -- # echo 10000
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@828 -- # echo 10001
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@829 -- # echo 10002
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/0/migration_port
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@832 -- # [[ -z '' ]]
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@834 -- # echo 10004
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@835 -- # echo 100
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@837 -- # [[ -z '' ]]
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@838 -- # [[ -z '' ]]
00:15:27.837   22:42:28 sma.sma_vhost -- sma/vhost_blk.sh@59 -- # vm_run 0
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@843 -- # local run_all=false
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@844 -- # local vms_to_run=
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@846 -- # getopts a-: optchar
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@856 -- # false
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@859 -- # shift 0
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@860 -- # for vm in "$@"
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/0/run.sh ]]
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@866 -- # vms_to_run+=' 0'
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@871 -- # vm_is_running 0
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@373 -- # return 1
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/0/run.sh'
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/0/run.sh'
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/0/run.sh'
00:15:27.837  INFO: running /root/vhost_test/vms/0/run.sh
00:15:27.837   22:42:28 sma.sma_vhost -- vhost/common.sh@877 -- # /root/vhost_test/vms/0/run.sh
00:15:27.837  Running VM in /root/vhost_test/vms/0
00:15:28.097  Waiting for QEMU pid file
00:15:29.474  === qemu.log ===
00:15:29.474  === qemu.log ===
00:15:29.474   22:42:29 sma.sma_vhost -- sma/vhost_blk.sh@60 -- # vm_wait_for_boot 300 0
00:15:29.474   22:42:29 sma.sma_vhost -- vhost/common.sh@913 -- # assert_number 300
00:15:29.474   22:42:29 sma.sma_vhost -- vhost/common.sh@281 -- # [[ 300 =~ [0-9]+ ]]
00:15:29.474   22:42:29 sma.sma_vhost -- vhost/common.sh@281 -- # return 0
00:15:29.474   22:42:29 sma.sma_vhost -- vhost/common.sh@915 -- # xtrace_disable
00:15:29.474   22:42:29 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:15:29.474  INFO: Waiting for VMs to boot
00:15:29.474  INFO: waiting for VM0 (/root/vhost_test/vms/0)
00:15:51.403  
00:15:51.403  INFO: VM0 ready
00:15:51.403  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:51.403  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:51.403  INFO: all VMs ready
00:15:51.403   22:42:51 sma.sma_vhost -- vhost/common.sh@973 -- # return 0
00:15:51.403   22:42:51 sma.sma_vhost -- sma/vhost_blk.sh@61 -- # timing_exit setup_vm
00:15:51.403   22:42:51 sma.sma_vhost -- common/autotest_common.sh@732 -- # xtrace_disable
00:15:51.403   22:42:51 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:15:51.403   22:42:51 sma.sma_vhost -- sma/vhost_blk.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/vhost -S /var/tmp -m 0x3 --wait-for-rpc
00:15:51.403   22:42:51 sma.sma_vhost -- sma/vhost_blk.sh@64 -- # vhostpid=165438
00:15:51.403   22:42:51 sma.sma_vhost -- sma/vhost_blk.sh@66 -- # waitforlisten 165438
00:15:51.403   22:42:51 sma.sma_vhost -- common/autotest_common.sh@835 -- # '[' -z 165438 ']'
00:15:51.403   22:42:51 sma.sma_vhost -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:51.403   22:42:51 sma.sma_vhost -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:51.403   22:42:51 sma.sma_vhost -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:51.403  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:51.403   22:42:51 sma.sma_vhost -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:51.403   22:42:51 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:15:51.403  [2024-12-10 22:42:51.468446] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:15:51.403  [2024-12-10 22:42:51.468553] [ DPDK EAL parameters: vhost --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165438 ]
00:15:51.403  EAL: No free 2048 kB hugepages reported on node 1
00:15:51.403  [2024-12-10 22:42:51.598745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:15:51.403  [2024-12-10 22:42:51.738662] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:51.403  [2024-12-10 22:42:51.738674] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:15:51.661   22:42:52 sma.sma_vhost -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:51.661   22:42:52 sma.sma_vhost -- common/autotest_common.sh@868 -- # return 0
00:15:51.661   22:42:52 sma.sma_vhost -- sma/vhost_blk.sh@69 -- # rpc_cmd dpdk_cryptodev_scan_accel_module
00:15:51.661   22:42:52 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.661   22:42:52 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:15:51.661   22:42:52 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.661   22:42:52 sma.sma_vhost -- sma/vhost_blk.sh@70 -- # rpc_cmd dpdk_cryptodev_set_driver -d crypto_aesni_mb
00:15:51.661   22:42:52 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.661   22:42:52 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:15:51.661  [2024-12-10 22:42:52.321161] accel_dpdk_cryptodev.c: 224:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:15:51.661   22:42:52 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.661   22:42:52 sma.sma_vhost -- sma/vhost_blk.sh@71 -- # rpc_cmd accel_assign_opc -o encrypt -m dpdk_cryptodev
00:15:51.661   22:42:52 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.661   22:42:52 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:15:51.661  [2024-12-10 22:42:52.329180] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:15:51.661   22:42:52 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.661   22:42:52 sma.sma_vhost -- sma/vhost_blk.sh@72 -- # rpc_cmd accel_assign_opc -o decrypt -m dpdk_cryptodev
00:15:51.661   22:42:52 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.661   22:42:52 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:15:51.661  [2024-12-10 22:42:52.337209] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:15:51.661   22:42:52 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.661   22:42:52 sma.sma_vhost -- sma/vhost_blk.sh@73 -- # rpc_cmd framework_start_init
00:15:51.661   22:42:52 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.661   22:42:52 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:15:51.920  [2024-12-10 22:42:52.584880] accel_dpdk_cryptodev.c:1179:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 1
00:15:52.178   22:42:52 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:52.178   22:42:52 sma.sma_vhost -- sma/vhost_blk.sh@93 -- # smapid=165649
00:15:52.178   22:42:52 sma.sma_vhost -- sma/vhost_blk.sh@96 -- # sma_waitforlisten
00:15:52.178   22:42:52 sma.sma_vhost -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:52.178   22:42:52 sma.sma_vhost -- sma/vhost_blk.sh@75 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:52.178   22:42:52 sma.sma_vhost -- sma/common.sh@8 -- # local sma_port=8080
00:15:52.178   22:42:52 sma.sma_vhost -- sma/common.sh@10 -- # (( i = 0 ))
00:15:52.178    22:42:52 sma.sma_vhost -- sma/vhost_blk.sh@75 -- # cat
00:15:52.178   22:42:52 sma.sma_vhost -- sma/common.sh@10 -- # (( i < 5 ))
00:15:52.178   22:42:52 sma.sma_vhost -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:52.178   22:42:52 sma.sma_vhost -- sma/common.sh@14 -- # sleep 1s
00:15:52.435  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:52.435  I0000 00:00:1733866973.041015  165649 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:53.369   22:42:53 sma.sma_vhost -- sma/common.sh@10 -- # (( i++ ))
00:15:53.369   22:42:53 sma.sma_vhost -- sma/common.sh@10 -- # (( i < 5 ))
00:15:53.369   22:42:53 sma.sma_vhost -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:53.369   22:42:53 sma.sma_vhost -- sma/common.sh@12 -- # return 0
00:15:53.369    22:42:53 sma.sma_vhost -- sma/vhost_blk.sh@99 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l'
00:15:53.369    22:42:53 sma.sma_vhost -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:53.369    22:42:53 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:53.369    22:42:53 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:15:53.369    22:42:53 sma.sma_vhost -- vhost/common.sh@338 -- # local vm_num=0
00:15:53.369    22:42:53 sma.sma_vhost -- vhost/common.sh@339 -- # shift
00:15:53.369     22:42:53 sma.sma_vhost -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:53.369     22:42:53 sma.sma_vhost -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:53.369     22:42:53 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:53.369     22:42:53 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:15:53.369     22:42:53 sma.sma_vhost -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:53.369     22:42:53 sma.sma_vhost -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:53.369    22:42:53 sma.sma_vhost -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l'
00:15:53.369  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:53.369   22:42:54 sma.sma_vhost -- sma/vhost_blk.sh@99 -- # [[ 0 -eq 0 ]]
00:15:53.369   22:42:54 sma.sma_vhost -- sma/vhost_blk.sh@102 -- # rpc_cmd bdev_null_create null0 100 4096
00:15:53.369   22:42:54 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.369   22:42:54 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:15:53.369  null0
00:15:53.369   22:42:54 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.369   22:42:54 sma.sma_vhost -- sma/vhost_blk.sh@103 -- # rpc_cmd bdev_null_create null1 100 4096
00:15:53.369   22:42:54 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.369   22:42:54 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:15:53.369  null1
00:15:53.369   22:42:54 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.369    22:42:54 sma.sma_vhost -- sma/vhost_blk.sh@104 -- # rpc_cmd bdev_get_bdevs -b null0
00:15:53.369    22:42:54 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.630    22:42:54 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:15:53.630    22:42:54 sma.sma_vhost -- sma/vhost_blk.sh@104 -- # jq -r '.[].uuid'
00:15:53.630    22:42:54 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.630   22:42:54 sma.sma_vhost -- sma/vhost_blk.sh@104 -- # uuid=017f1e14-f810-4b5a-9f4e-1ded894ebe7f
00:15:53.630    22:42:54 sma.sma_vhost -- sma/vhost_blk.sh@105 -- # rpc_cmd bdev_get_bdevs -b null1
00:15:53.630    22:42:54 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.630    22:42:54 sma.sma_vhost -- sma/vhost_blk.sh@105 -- # jq -r '.[].uuid'
00:15:53.630    22:42:54 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:15:53.630    22:42:54 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.630   22:42:54 sma.sma_vhost -- sma/vhost_blk.sh@105 -- # uuid2=6848ce43-6340-4c37-947e-c07df7f47c42
00:15:53.630    22:42:54 sma.sma_vhost -- sma/vhost_blk.sh@108 -- # create_device 0 017f1e14-f810-4b5a-9f4e-1ded894ebe7f
00:15:53.630    22:42:54 sma.sma_vhost -- sma/vhost_blk.sh@108 -- # jq -r .handle
00:15:53.630    22:42:54 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:53.630     22:42:54 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 017f1e14-f810-4b5a-9f4e-1ded894ebe7f
00:15:53.630     22:42:54 sma.sma_vhost -- sma/common.sh@20 -- # python
00:15:53.888  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:53.888  I0000 00:00:1733866974.500475  165906 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:53.888  I0000 00:00:1733866974.502052  165906 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:53.888  I0000 00:00:1733866974.503333  166101 subchannel.cc:806] subchannel 0x55cf864d0de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55cf86370840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55cf864eada0, grpc.internal.client_channel_call_destination=0x7f64df971390, grpc.internal.event_engine=0x55cf861ef030, grpc.internal.security_connector=0x55cf864822b0, grpc.internal.subchannel_pool=0x55cf8633f690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55cf8605c9a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:42:54.502921777+01:00"}), backing off for 1000 ms
00:15:53.888  VHOST_CONFIG: (/var/tmp/sma-0) vhost-user server: socket created, fd: 232
00:15:53.888  VHOST_CONFIG: (/var/tmp/sma-0) binding succeeded
00:15:54.826  VHOST_CONFIG: (/var/tmp/sma-0) new vhost user connection is 59
00:15:54.826  VHOST_CONFIG: (/var/tmp/sma-0) new device, handle is 0
00:15:54.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:15:54.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:15:54.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:15:54.826  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Vhost-user protocol features: 0x11ebf
00:15:54.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_QUEUE_NUM
00:15:54.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_BACKEND_REQ_FD
00:15:54.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_OWNER
00:15:54.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:15:54.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:15:54.826  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:236
00:15:54.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:15:54.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:15:54.826  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:237
00:15:54.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:15:54.826  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_CONFIG
00:15:54.826   22:42:55 sma.sma_vhost -- sma/vhost_blk.sh@108 -- # devid0=virtio_blk:sma-0
00:15:54.826   22:42:55 sma.sma_vhost -- sma/vhost_blk.sh@109 -- # rpc_cmd vhost_get_controllers -n sma-0
00:15:54.826   22:42:55 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:54.826   22:42:55 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:15:54.826  [
00:15:54.826  {
00:15:54.826  "ctrlr": "sma-0",
00:15:54.826  "cpumask": "0x3",
00:15:54.826  "delay_base_us": 0,
00:15:54.826  "iops_threshold": 60000,
00:15:54.826  "socket": "/var/tmp/sma-0",
00:15:54.826  "sessions": [
00:15:54.826  {
00:15:54.826  "vid": 0,
00:15:54.826  "id": 0,
00:15:54.826  "name": "sma-0s0",
00:15:54.826  "started": false,
00:15:54.826  "max_queues": 0,
00:15:54.826  "inflight_task_cnt": 0
00:15:54.826  }
00:15:54.826  ],
00:15:54.826  "backend_specific": {
00:15:54.826  "block": {
00:15:54.826  "readonly": false,
00:15:54.826  "bdev": "null0",
00:15:54.826  "transport": "vhost_user_blk"
00:15:54.826  }
00:15:54.826  }
00:15:54.826  }
00:15:54.826  ]
00:15:54.826   22:42:55 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.085    22:42:55 sma.sma_vhost -- sma/vhost_blk.sh@111 -- # create_device 1 6848ce43-6340-4c37-947e-c07df7f47c42
00:15:55.085    22:42:55 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:55.085    22:42:55 sma.sma_vhost -- sma/vhost_blk.sh@111 -- # jq -r .handle
00:15:55.085     22:42:55 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 6848ce43-6340-4c37-947e-c07df7f47c42
00:15:55.085     22:42:55 sma.sma_vhost -- sma/common.sh@20 -- # python
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000008):
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_INFLIGHT_FD
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd num_queues: 2
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd queue_size: 128
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_size: 4224
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_offset: 0
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) send inflight fd: 58
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_INFLIGHT_FD
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_size: 4224
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_offset: 0
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd num_queues: 2
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd queue_size: 128
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd fd: 238
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd pervq_inflight_size: 2112
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:58
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:236
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_MEM_TABLE
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) guest memory region size: 0x40000000
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest physical addr: 0x0
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest virtual  addr: 0x7f03c7e00000
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) 	 host  virtual  addr: 0x7ff98b600000
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap addr : 0x7ff98b600000
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap size : 0x40000000
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap align: 0x200000
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap off  : 0x0
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:0 file:239
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:1 file:240
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 0
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 1
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x0000000f):
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 1
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 1
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 1
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:15:55.085  VHOST_CONFIG: (/var/tmp/sma-0) virtio is now ready for processing.
00:15:55.344  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:55.344  I0000 00:00:1733866975.900368  166321 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:55.344  I0000 00:00:1733866975.902014  166321 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:55.344  I0000 00:00:1733866975.903431  166337 subchannel.cc:806] subchannel 0x5574147f7de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x557414697840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x557414811da0, grpc.internal.client_channel_call_destination=0x7f136da11390, grpc.internal.event_engine=0x557414516060, grpc.internal.security_connector=0x5574147a92b0, grpc.internal.subchannel_pool=0x557414666690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5574143839a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:42:55.902920289+01:00"}), backing off for 1000 ms
00:15:55.344  VHOST_CONFIG: (/var/tmp/sma-1) vhost-user server: socket created, fd: 243
00:15:55.344  VHOST_CONFIG: (/var/tmp/sma-1) binding succeeded
00:15:55.911  VHOST_CONFIG: (/var/tmp/sma-1) new vhost user connection is 241
00:15:55.911  VHOST_CONFIG: (/var/tmp/sma-1) new device, handle is 1
00:15:55.911  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_FEATURES
00:15:55.911  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:15:55.911  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:15:55.911  VHOST_CONFIG: (/var/tmp/sma-1) negotiated Vhost-user protocol features: 0x11ebf
00:15:55.911  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_QUEUE_NUM
00:15:55.911  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_BACKEND_REQ_FD
00:15:55.911  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_OWNER
00:15:55.911  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_FEATURES
00:15:55.911  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_CALL
00:15:55.911  VHOST_CONFIG: (/var/tmp/sma-1) vring call idx:0 file:245
00:15:55.911  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ERR
00:15:55.911  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_CALL
00:15:55.911  VHOST_CONFIG: (/var/tmp/sma-1) vring call idx:1 file:246
00:15:55.911  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ERR
00:15:55.911  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_CONFIG
00:15:55.911   22:42:56 sma.sma_vhost -- sma/vhost_blk.sh@111 -- # devid1=virtio_blk:sma-1
00:15:55.911   22:42:56 sma.sma_vhost -- sma/vhost_blk.sh@112 -- # rpc_cmd vhost_get_controllers -n sma-0
00:15:55.911   22:42:56 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.911   22:42:56 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:15:56.169  [
00:15:56.169  {
00:15:56.169  "ctrlr": "sma-0",
00:15:56.169  "cpumask": "0x3",
00:15:56.169  "delay_base_us": 0,
00:15:56.169  "iops_threshold": 60000,
00:15:56.169  "socket": "/var/tmp/sma-0",
00:15:56.169  "sessions": [
00:15:56.169  {
00:15:56.169  "vid": 0,
00:15:56.169  "id": 0,
00:15:56.169  "name": "sma-0s0",
00:15:56.169  "started": true,
00:15:56.169  "max_queues": 2,
00:15:56.169  "inflight_task_cnt": 0
00:15:56.169  }
00:15:56.169  ],
00:15:56.169  "backend_specific": {
00:15:56.169  "block": {
00:15:56.169  "readonly": false,
00:15:56.169  "bdev": "null0",
00:15:56.169  "transport": "vhost_user_blk"
00:15:56.169  }
00:15:56.169  }
00:15:56.169  }
00:15:56.169  ]
00:15:56.169   22:42:56 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.169   22:42:56 sma.sma_vhost -- sma/vhost_blk.sh@113 -- # rpc_cmd vhost_get_controllers -n sma-1
00:15:56.169   22:42:56 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.169   22:42:56 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:15:56.169  [
00:15:56.169  {
00:15:56.169  "ctrlr": "sma-1",
00:15:56.169  "cpumask": "0x3",
00:15:56.169  "delay_base_us": 0,
00:15:56.169  "iops_threshold": 60000,
00:15:56.169  "socket": "/var/tmp/sma-1",
00:15:56.169  "sessions": [
00:15:56.169  {
00:15:56.169  "vid": 1,
00:15:56.169  "id": 0,
00:15:56.169  "name": "sma-1s1",
00:15:56.169  "started": false,
00:15:56.169  "max_queues": 0,
00:15:56.169  "inflight_task_cnt": 0
00:15:56.169  }
00:15:56.169  ],
00:15:56.169  "backend_specific": {
00:15:56.169  "block": {
00:15:56.169  "readonly": false,
00:15:56.169  "bdev": "null1",
00:15:56.169  "transport": "vhost_user_blk"
00:15:56.170  }
00:15:56.170  }
00:15:56.170  }
00:15:56.170  ]
00:15:56.170   22:42:56 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.170   22:42:56 sma.sma_vhost -- sma/vhost_blk.sh@114 -- # [[ virtio_blk:sma-0 != \v\i\r\t\i\o\_\b\l\k\:\s\m\a\-\1 ]]
00:15:56.170    22:42:56 sma.sma_vhost -- sma/vhost_blk.sh@117 -- # rpc_cmd vhost_get_controllers
00:15:56.170    22:42:56 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.170    22:42:56 sma.sma_vhost -- sma/vhost_blk.sh@117 -- # jq -r '. | length'
00:15:56.170    22:42:56 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_FEATURES
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) negotiated Virtio features: 0x150005446
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_STATUS
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_STATUS
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) new device status(0x00000008):
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) 	-RESET: 0
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) 	-ACKNOWLEDGE: 0
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER: 0
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) 	-FEATURES_OK: 1
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER_OK: 0
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) 	-DEVICE_NEED_RESET: 0
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) 	-FAILED: 0
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_INFLIGHT_FD
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) get_inflight_fd num_queues: 2
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) get_inflight_fd queue_size: 128
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) send inflight mmap_size: 4224
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) send inflight mmap_offset: 0
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) send inflight fd: 247
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_INFLIGHT_FD
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd mmap_size: 4224
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd mmap_offset: 0
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd num_queues: 2
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd queue_size: 128
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd fd: 248
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd pervq_inflight_size: 2112
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_CALL
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) vring call idx:0 file:247
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_CALL
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) vring call idx:1 file:245
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_FEATURES
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) negotiated Virtio features: 0x150005446
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_STATUS
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_MEM_TABLE
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) guest memory region size: 0x40000000
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) 	 guest physical addr: 0x0
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) 	 guest virtual  addr: 0x7f03c7e00000
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) 	 host  virtual  addr: 0x7ff94b600000
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) 	 mmap addr : 0x7ff94b600000
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) 	 mmap size : 0x40000000
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) 	 mmap align: 0x200000
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) 	 mmap off  : 0x0
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_NUM
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_BASE
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ADDR
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_KICK
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) vring kick idx:0 file:249
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_NUM
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_BASE
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ADDR
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_KICK
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) vring kick idx:1 file:60
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ENABLE
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) set queue enable: 1 to qp idx: 0
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ENABLE
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) set queue enable: 1 to qp idx: 1
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_STATUS
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_STATUS
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) new device status(0x0000000f):
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) 	-RESET: 0
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) 	-ACKNOWLEDGE: 1
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER: 1
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) 	-FEATURES_OK: 1
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER_OK: 1
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) 	-DEVICE_NEED_RESET: 0
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) 	-FAILED: 0
00:15:56.170  VHOST_CONFIG: (/var/tmp/sma-1) virtio is now ready for processing.
00:15:56.170    22:42:56 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.170   22:42:56 sma.sma_vhost -- sma/vhost_blk.sh@117 -- # [[ 2 -eq 2 ]]
00:15:56.170    22:42:56 sma.sma_vhost -- sma/vhost_blk.sh@121 -- # create_device 0 017f1e14-f810-4b5a-9f4e-1ded894ebe7f
00:15:56.170    22:42:56 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:56.170    22:42:56 sma.sma_vhost -- sma/vhost_blk.sh@121 -- # jq -r .handle
00:15:56.170     22:42:56 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 017f1e14-f810-4b5a-9f4e-1ded894ebe7f
00:15:56.170     22:42:56 sma.sma_vhost -- sma/common.sh@20 -- # python
00:15:56.429  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:56.429  I0000 00:00:1733866976.999416  166501 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:56.429  I0000 00:00:1733866977.001067  166501 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:56.429  I0000 00:00:1733866977.002448  166577 subchannel.cc:806] subchannel 0x560c3c548de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x560c3c3e8840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x560c3c562da0, grpc.internal.client_channel_call_destination=0x7f370d211390, grpc.internal.event_engine=0x560c3c267030, grpc.internal.security_connector=0x560c3c4fa2b0, grpc.internal.subchannel_pool=0x560c3c3b7690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x560c3c0d49a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:42:57.001950614+01:00"}), backing off for 1000 ms
00:15:56.429   22:42:57 sma.sma_vhost -- sma/vhost_blk.sh@121 -- # tmp0=virtio_blk:sma-0
00:15:56.429    22:42:57 sma.sma_vhost -- sma/vhost_blk.sh@122 -- # create_device 1 6848ce43-6340-4c37-947e-c07df7f47c42
00:15:56.429    22:42:57 sma.sma_vhost -- sma/vhost_blk.sh@122 -- # jq -r .handle
00:15:56.429    22:42:57 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:56.429     22:42:57 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 6848ce43-6340-4c37-947e-c07df7f47c42
00:15:56.429     22:42:57 sma.sma_vhost -- sma/common.sh@20 -- # python
00:15:56.687  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:56.687  I0000 00:00:1733866977.370710  166602 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:56.687  I0000 00:00:1733866977.372407  166602 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:56.687  I0000 00:00:1733866977.373864  166609 subchannel.cc:806] subchannel 0x55d921eb7de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55d921d57840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55d921ed1da0, grpc.internal.client_channel_call_destination=0x7f83976bc390, grpc.internal.event_engine=0x55d921bd6060, grpc.internal.security_connector=0x55d921e692b0, grpc.internal.subchannel_pool=0x55d921d26690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55d921a439a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:42:57.373299996+01:00"}), backing off for 1000 ms
00:15:56.687   22:42:57 sma.sma_vhost -- sma/vhost_blk.sh@122 -- # tmp1=virtio_blk:sma-1
00:15:56.687   22:42:57 sma.sma_vhost -- sma/vhost_blk.sh@125 -- # NOT create_device 1 017f1e14-f810-4b5a-9f4e-1ded894ebe7f
00:15:56.687   22:42:57 sma.sma_vhost -- common/autotest_common.sh@652 -- # local es=0
00:15:56.687   22:42:57 sma.sma_vhost -- common/autotest_common.sh@654 -- # valid_exec_arg create_device 1 017f1e14-f810-4b5a-9f4e-1ded894ebe7f
00:15:56.687   22:42:57 sma.sma_vhost -- common/autotest_common.sh@640 -- # local arg=create_device
00:15:56.687   22:42:57 sma.sma_vhost -- sma/vhost_blk.sh@125 -- # jq -r .handle
00:15:56.687   22:42:57 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:56.687    22:42:57 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -t create_device
00:15:56.687   22:42:57 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:56.687   22:42:57 sma.sma_vhost -- common/autotest_common.sh@655 -- # create_device 1 017f1e14-f810-4b5a-9f4e-1ded894ebe7f
00:15:56.687   22:42:57 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:56.687    22:42:57 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 017f1e14-f810-4b5a-9f4e-1ded894ebe7f
00:15:56.687    22:42:57 sma.sma_vhost -- sma/common.sh@20 -- # python
00:15:56.946  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:56.946  I0000 00:00:1733866977.685925  166632 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:56.946  I0000 00:00:1733866977.687496  166632 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:56.946  I0000 00:00:1733866977.688817  166635 subchannel.cc:806] subchannel 0x55d03975cde0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55d0395fc840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55d039776da0, grpc.internal.client_channel_call_destination=0x7f90a7205390, grpc.internal.event_engine=0x55d03947b060, grpc.internal.security_connector=0x55d03970e2b0, grpc.internal.subchannel_pool=0x55d0395cb690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55d0392e89a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:42:57.688331005+01:00"}), backing off for 1000 ms
00:15:57.204  Traceback (most recent call last):
00:15:57.204    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:15:57.204      main(sys.argv[1:])
00:15:57.204    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:15:57.204      result = client.call(request['method'], request.get('params', {}))
00:15:57.204               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:57.204    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:15:57.204      response = func(request=json_format.ParseDict(params, input()))
00:15:57.204                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:57.204    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:15:57.204      return _end_unary_response_blocking(state, call, False, None)
00:15:57.204             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:57.204    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:15:57.204      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:15:57.204      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:57.204  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:15:57.204  	status = StatusCode.INTERNAL
00:15:57.204  	details = "Failed to create vhost device"
00:15:57.204  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-12-10T22:42:57.73615332+01:00", grpc_status:13, grpc_message:"Failed to create vhost device"}"
00:15:57.204  >
00:15:57.204   22:42:57 sma.sma_vhost -- common/autotest_common.sh@655 -- # es=1
00:15:57.204   22:42:57 sma.sma_vhost -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:57.204   22:42:57 sma.sma_vhost -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:57.204   22:42:57 sma.sma_vhost -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:57.204    22:42:57 sma.sma_vhost -- sma/vhost_blk.sh@128 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l'
00:15:57.204    22:42:57 sma.sma_vhost -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:57.204    22:42:57 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:57.204    22:42:57 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:15:57.204    22:42:57 sma.sma_vhost -- vhost/common.sh@338 -- # local vm_num=0
00:15:57.204    22:42:57 sma.sma_vhost -- vhost/common.sh@339 -- # shift
00:15:57.204     22:42:57 sma.sma_vhost -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:57.204     22:42:57 sma.sma_vhost -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:57.204     22:42:57 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:57.204     22:42:57 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:15:57.204     22:42:57 sma.sma_vhost -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:57.204     22:42:57 sma.sma_vhost -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:57.204    22:42:57 sma.sma_vhost -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l'
00:15:57.204  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:57.204   22:42:57 sma.sma_vhost -- sma/vhost_blk.sh@128 -- # [[ 2 -eq 2 ]]
00:15:57.204    22:42:57 sma.sma_vhost -- sma/vhost_blk.sh@130 -- # jq -r '. | length'
00:15:57.204    22:42:57 sma.sma_vhost -- sma/vhost_blk.sh@130 -- # rpc_cmd vhost_get_controllers
00:15:57.204    22:42:57 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:57.204    22:42:57 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:15:57.204    22:42:57 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:57.204   22:42:57 sma.sma_vhost -- sma/vhost_blk.sh@130 -- # [[ 2 -eq 2 ]]
00:15:57.204   22:42:57 sma.sma_vhost -- sma/vhost_blk.sh@131 -- # [[ virtio_blk:sma-0 == \v\i\r\t\i\o\_\b\l\k\:\s\m\a\-\0 ]]
00:15:57.204   22:42:57 sma.sma_vhost -- sma/vhost_blk.sh@132 -- # [[ virtio_blk:sma-1 == \v\i\r\t\i\o\_\b\l\k\:\s\m\a\-\1 ]]
00:15:57.204   22:42:57 sma.sma_vhost -- sma/vhost_blk.sh@135 -- # delete_device virtio_blk:sma-0
00:15:57.204   22:42:57 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:57.462  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:57.462  I0000 00:00:1733866978.177149  166837 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:57.462  I0000 00:00:1733866978.178910  166837 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:57.462  I0000 00:00:1733866978.180267  166865 subchannel.cc:806] subchannel 0x55c5fdd21de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55c5fdbc1840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55c5fdd3bda0, grpc.internal.client_channel_call_destination=0x7f0948815390, grpc.internal.event_engine=0x55c5fda40030, grpc.internal.security_connector=0x55c5fdcd32b0, grpc.internal.subchannel_pool=0x55c5fdb90690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55c5fd8ad9a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:42:58.179727248+01:00"}), backing off for 999 ms
00:15:57.462  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:15:57.462  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000000):
00:15:57.462  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 1
00:15:57.462  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:15:57.462  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:15:57.462  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 0
00:15:57.462  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:15:57.462  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:15:57.462  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:15:57.462  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:15:57.462  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 0
00:15:57.462  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:15:57.462  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 1
00:15:57.462  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:15:57.462  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 file:0
00:15:57.462  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:15:57.462  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 file:50
00:15:57.721  VHOST_CONFIG: (/var/tmp/sma-0) vhost peer closed
00:15:57.721  {}
00:15:57.721   22:42:58 sma.sma_vhost -- sma/vhost_blk.sh@136 -- # NOT rpc_cmd vhost_get_controllers -n sma-0
00:15:57.721   22:42:58 sma.sma_vhost -- common/autotest_common.sh@652 -- # local es=0
00:15:57.721   22:42:58 sma.sma_vhost -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd vhost_get_controllers -n sma-0
00:15:57.721   22:42:58 sma.sma_vhost -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:15:57.721   22:42:58 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:57.721    22:42:58 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:15:57.721   22:42:58 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:57.721   22:42:58 sma.sma_vhost -- common/autotest_common.sh@655 -- # rpc_cmd vhost_get_controllers -n sma-0
00:15:57.721   22:42:58 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:57.721   22:42:58 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:15:57.721  request:
00:15:57.721  {
00:15:57.721  "name": "sma-0",
00:15:57.721  "method": "vhost_get_controllers",
00:15:57.721  "req_id": 1
00:15:57.721  }
00:15:57.721  Got JSON-RPC error response
00:15:57.721  response:
00:15:57.721  {
00:15:57.721  "code": -32603,
00:15:57.721  "message": "No such device"
00:15:57.721  }
00:15:57.721   22:42:58 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:15:57.721   22:42:58 sma.sma_vhost -- common/autotest_common.sh@655 -- # es=1
00:15:57.721   22:42:58 sma.sma_vhost -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:57.721   22:42:58 sma.sma_vhost -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:57.721   22:42:58 sma.sma_vhost -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:57.721    22:42:58 sma.sma_vhost -- sma/vhost_blk.sh@137 -- # rpc_cmd vhost_get_controllers
00:15:57.722    22:42:58 sma.sma_vhost -- sma/vhost_blk.sh@137 -- # jq -r '. | length'
00:15:57.722    22:42:58 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:57.722    22:42:58 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:15:57.722    22:42:58 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:57.722   22:42:58 sma.sma_vhost -- sma/vhost_blk.sh@137 -- # [[ 1 -eq 1 ]]
00:15:57.722   22:42:58 sma.sma_vhost -- sma/vhost_blk.sh@139 -- # delete_device virtio_blk:sma-1
00:15:57.722   22:42:58 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:58.025  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:58.025  I0000 00:00:1733866978.567454  166893 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:58.025  I0000 00:00:1733866978.572306  166893 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:58.025  I0000 00:00:1733866978.573563  166894 subchannel.cc:806] subchannel 0x5565a8a79de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5565a8919840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5565a8a93da0, grpc.internal.client_channel_call_destination=0x7f9b9f94c390, grpc.internal.event_engine=0x5565a8798030, grpc.internal.security_connector=0x5565a8a2b2b0, grpc.internal.subchannel_pool=0x5565a88e8690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5565a86059a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:42:58.57309454+01:00"}), backing off for 1000 ms
00:15:58.025  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_STATUS
00:15:58.025  VHOST_CONFIG: (/var/tmp/sma-1) new device status(0x00000000):
00:15:58.025  VHOST_CONFIG: (/var/tmp/sma-1) 	-RESET: 1
00:15:58.025  VHOST_CONFIG: (/var/tmp/sma-1) 	-ACKNOWLEDGE: 0
00:15:58.025  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER: 0
00:15:58.025  VHOST_CONFIG: (/var/tmp/sma-1) 	-FEATURES_OK: 0
00:15:58.025  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER_OK: 0
00:15:58.025  VHOST_CONFIG: (/var/tmp/sma-1) 	-DEVICE_NEED_RESET: 0
00:15:58.025  VHOST_CONFIG: (/var/tmp/sma-1) 	-FAILED: 0
00:15:58.025  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ENABLE
00:15:58.025  VHOST_CONFIG: (/var/tmp/sma-1) set queue enable: 0 to qp idx: 0
00:15:58.025  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ENABLE
00:15:58.025  VHOST_CONFIG: (/var/tmp/sma-1) set queue enable: 0 to qp idx: 1
00:15:58.025  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_VRING_BASE
00:15:58.025  VHOST_CONFIG: (/var/tmp/sma-1) vring base idx:0 file:14
00:15:58.025  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_VRING_BASE
00:15:58.025  VHOST_CONFIG: (/var/tmp/sma-1) vring base idx:1 file:36
00:15:58.025  VHOST_CONFIG: (/var/tmp/sma-1) vhost peer closed
00:15:58.025  {}
00:15:58.314   22:42:58 sma.sma_vhost -- sma/vhost_blk.sh@140 -- # NOT rpc_cmd vhost_get_controllers -n sma-1
00:15:58.314   22:42:58 sma.sma_vhost -- common/autotest_common.sh@652 -- # local es=0
00:15:58.314   22:42:58 sma.sma_vhost -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd vhost_get_controllers -n sma-1
00:15:58.314   22:42:58 sma.sma_vhost -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:15:58.314   22:42:58 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:58.314    22:42:58 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:15:58.314   22:42:58 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:58.314   22:42:58 sma.sma_vhost -- common/autotest_common.sh@655 -- # rpc_cmd vhost_get_controllers -n sma-1
00:15:58.314   22:42:58 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:58.314   22:42:58 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:15:58.314  request:
00:15:58.314  {
00:15:58.314  "name": "sma-1",
00:15:58.314  "method": "vhost_get_controllers",
00:15:58.314  "req_id": 1
00:15:58.314  }
00:15:58.314  Got JSON-RPC error response
00:15:58.314  response:
00:15:58.314  {
00:15:58.314  "code": -32603,
00:15:58.314  "message": "No such device"
00:15:58.314  }
00:15:58.314   22:42:58 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:15:58.314   22:42:58 sma.sma_vhost -- common/autotest_common.sh@655 -- # es=1
00:15:58.314   22:42:58 sma.sma_vhost -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:58.314   22:42:58 sma.sma_vhost -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:58.314   22:42:58 sma.sma_vhost -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:58.314    22:42:58 sma.sma_vhost -- sma/vhost_blk.sh@141 -- # rpc_cmd vhost_get_controllers
00:15:58.314    22:42:58 sma.sma_vhost -- sma/vhost_blk.sh@141 -- # jq -r '. | length'
00:15:58.314    22:42:58 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:58.314    22:42:58 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:15:58.314    22:42:58 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:58.314   22:42:58 sma.sma_vhost -- sma/vhost_blk.sh@141 -- # [[ 0 -eq 0 ]]
00:15:58.315   22:42:58 sma.sma_vhost -- sma/vhost_blk.sh@144 -- # delete_device virtio_blk:sma-0
00:15:58.315   22:42:58 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:58.315  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:58.315  I0000 00:00:1733866979.068184  166918 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:58.315  I0000 00:00:1733866979.070021  166918 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:58.315  I0000 00:00:1733866979.071284  166981 subchannel.cc:806] subchannel 0x555f23a40de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x555f238e0840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x555f23a5ada0, grpc.internal.client_channel_call_destination=0x7fe068bf2390, grpc.internal.event_engine=0x555f2375f030, grpc.internal.security_connector=0x555f239f22b0, grpc.internal.subchannel_pool=0x555f238af690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x555f235cc9a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:42:59.070816184+01:00"}), backing off for 999 ms
00:15:58.315  {}
00:15:58.606   22:42:59 sma.sma_vhost -- sma/vhost_blk.sh@145 -- # delete_device virtio_blk:sma-1
00:15:58.606   22:42:59 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:58.606  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:58.606  I0000 00:00:1733866979.305847  167042 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:58.606  I0000 00:00:1733866979.307438  167042 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:58.606  I0000 00:00:1733866979.308510  167145 subchannel.cc:806] subchannel 0x557f386f6de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x557f38596840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x557f38710da0, grpc.internal.client_channel_call_destination=0x7f01022c8390, grpc.internal.event_engine=0x557f38415030, grpc.internal.security_connector=0x557f386a82b0, grpc.internal.subchannel_pool=0x557f38565690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x557f382829a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:42:59.308160541+01:00"}), backing off for 1000 ms
00:15:58.606  {}
00:15:58.606    22:42:59 sma.sma_vhost -- sma/vhost_blk.sh@148 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l'
00:15:58.606    22:42:59 sma.sma_vhost -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:58.606    22:42:59 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:58.606    22:42:59 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:15:58.606    22:42:59 sma.sma_vhost -- vhost/common.sh@338 -- # local vm_num=0
00:15:58.606    22:42:59 sma.sma_vhost -- vhost/common.sh@339 -- # shift
00:15:58.606     22:42:59 sma.sma_vhost -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:58.606     22:42:59 sma.sma_vhost -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:58.606     22:42:59 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:58.606     22:42:59 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:15:58.606     22:42:59 sma.sma_vhost -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:58.606     22:42:59 sma.sma_vhost -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:58.606    22:42:59 sma.sma_vhost -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l'
00:15:58.606  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:58.899   22:42:59 sma.sma_vhost -- sma/vhost_blk.sh@148 -- # [[ 0 -eq 0 ]]
00:15:58.899   22:42:59 sma.sma_vhost -- sma/vhost_blk.sh@150 -- # devids=()
00:15:58.899    22:42:59 sma.sma_vhost -- sma/vhost_blk.sh@153 -- # rpc_cmd bdev_get_bdevs -b null0
00:15:58.899    22:42:59 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:58.899    22:42:59 sma.sma_vhost -- sma/vhost_blk.sh@153 -- # jq -r '.[].uuid'
00:15:58.899    22:42:59 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:15:58.899    22:42:59 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:58.899   22:42:59 sma.sma_vhost -- sma/vhost_blk.sh@153 -- # uuid=017f1e14-f810-4b5a-9f4e-1ded894ebe7f
00:15:58.899    22:42:59 sma.sma_vhost -- sma/vhost_blk.sh@154 -- # create_device 0 017f1e14-f810-4b5a-9f4e-1ded894ebe7f
00:15:58.899    22:42:59 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:58.899    22:42:59 sma.sma_vhost -- sma/vhost_blk.sh@154 -- # jq -r .handle
00:15:58.899     22:42:59 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 017f1e14-f810-4b5a-9f4e-1ded894ebe7f
00:15:58.899     22:42:59 sma.sma_vhost -- sma/common.sh@20 -- # python
00:15:59.193  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:59.193  I0000 00:00:1733866979.811271  167179 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:59.193  I0000 00:00:1733866979.813063  167179 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:59.193  I0000 00:00:1733866979.814395  167185 subchannel.cc:806] subchannel 0x55c2d876fde0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55c2d860f840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55c2d8789da0, grpc.internal.client_channel_call_destination=0x7f73e14b4390, grpc.internal.event_engine=0x55c2d848e030, grpc.internal.security_connector=0x55c2d87212b0, grpc.internal.subchannel_pool=0x55c2d85de690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55c2d82fb9a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:42:59.813953387+01:00"}), backing off for 1000 ms
00:15:59.193  VHOST_CONFIG: (/var/tmp/sma-0) vhost-user server: socket created, fd: 232
00:15:59.193  VHOST_CONFIG: (/var/tmp/sma-0) binding succeeded
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) new vhost user connection is 59
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) new device, handle is 0
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Vhost-user protocol features: 0x11ebf
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_QUEUE_NUM
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_BACKEND_REQ_FD
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_OWNER
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:236
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:237
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_CONFIG
00:16:00.140   22:43:00 sma.sma_vhost -- sma/vhost_blk.sh@154 -- # devids[0]=virtio_blk:sma-0
00:16:00.140    22:43:00 sma.sma_vhost -- sma/vhost_blk.sh@155 -- # rpc_cmd bdev_get_bdevs -b null1
00:16:00.140    22:43:00 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.140    22:43:00 sma.sma_vhost -- sma/vhost_blk.sh@155 -- # jq -r '.[].uuid'
00:16:00.140    22:43:00 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:00.140    22:43:00 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.140   22:43:00 sma.sma_vhost -- sma/vhost_blk.sh@155 -- # uuid=6848ce43-6340-4c37-947e-c07df7f47c42
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000008):
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_INFLIGHT_FD
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd num_queues: 2
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd queue_size: 128
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_size: 4224
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_offset: 0
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) send inflight fd: 58
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_INFLIGHT_FD
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_size: 4224
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_offset: 0
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd num_queues: 2
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd queue_size: 128
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd fd: 238
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd pervq_inflight_size: 2112
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:58
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:236
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_MEM_TABLE
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) guest memory region size: 0x40000000
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest physical addr: 0x0
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest virtual  addr: 0x7f03c7e00000
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) 	 host  virtual  addr: 0x7ff98b600000
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap addr : 0x7ff98b600000
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap size : 0x40000000
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap align: 0x200000
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap off  : 0x0
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:0 file:239
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:16:00.140    22:43:00 sma.sma_vhost -- sma/vhost_blk.sh@156 -- # create_device 32 6848ce43-6340-4c37-947e-c07df7f47c42
00:16:00.140    22:43:00 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:00.140    22:43:00 sma.sma_vhost -- sma/vhost_blk.sh@156 -- # jq -r .handle
00:16:00.140     22:43:00 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 6848ce43-6340-4c37-947e-c07df7f47c42
00:16:00.140     22:43:00 sma.sma_vhost -- sma/common.sh@20 -- # python
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:1 file:240
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 0
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 1
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x0000000f):
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 1
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 1
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 1
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:16:00.140  VHOST_CONFIG: (/var/tmp/sma-0) virtio is now ready for processing.
00:16:00.398  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:00.398  I0000 00:00:1733866981.150974  167422 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:00.398  I0000 00:00:1733866981.152738  167422 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:00.398  I0000 00:00:1733866981.154148  167432 subchannel.cc:806] subchannel 0x560ff388cde0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x560ff372c840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x560ff38a6da0, grpc.internal.client_channel_call_destination=0x7f2dae2ea390, grpc.internal.event_engine=0x560ff35ab060, grpc.internal.security_connector=0x560ff383e2b0, grpc.internal.subchannel_pool=0x560ff36fb690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x560ff34189a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:01.153655517+01:00"}), backing off for 999 ms
00:16:00.657  VHOST_CONFIG: (/var/tmp/sma-32) vhost-user server: socket created, fd: 243
00:16:00.657  VHOST_CONFIG: (/var/tmp/sma-32) binding succeeded
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) new vhost user connection is 241
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) new device, handle is 1
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_FEATURES
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) negotiated Vhost-user protocol features: 0x11ebf
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_QUEUE_NUM
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_BACKEND_REQ_FD
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_OWNER
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_FEATURES
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_CALL
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) vring call idx:0 file:245
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ERR
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_CALL
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) vring call idx:1 file:246
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ERR
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_CONFIG
00:16:01.226   22:43:01 sma.sma_vhost -- sma/vhost_blk.sh@156 -- # devids[1]=virtio_blk:sma-32
00:16:01.226    22:43:01 sma.sma_vhost -- sma/vhost_blk.sh@158 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l'
00:16:01.226    22:43:01 sma.sma_vhost -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:16:01.226    22:43:01 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:16:01.226    22:43:01 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:16:01.226    22:43:01 sma.sma_vhost -- vhost/common.sh@338 -- # local vm_num=0
00:16:01.226    22:43:01 sma.sma_vhost -- vhost/common.sh@339 -- # shift
00:16:01.226     22:43:01 sma.sma_vhost -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:16:01.226     22:43:01 sma.sma_vhost -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:16:01.226     22:43:01 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:16:01.226     22:43:01 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:16:01.226     22:43:01 sma.sma_vhost -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:16:01.226     22:43:01 sma.sma_vhost -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:16:01.226    22:43:01 sma.sma_vhost -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l'
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_FEATURES
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) negotiated Virtio features: 0x150005446
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_STATUS
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_STATUS
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) new device status(0x00000008):
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) 	-RESET: 0
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) 	-ACKNOWLEDGE: 0
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER: 0
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) 	-FEATURES_OK: 1
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER_OK: 0
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) 	-DEVICE_NEED_RESET: 0
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) 	-FAILED: 0
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_INFLIGHT_FD
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) get_inflight_fd num_queues: 2
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) get_inflight_fd queue_size: 128
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) send inflight mmap_size: 4224
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) send inflight mmap_offset: 0
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) send inflight fd: 242
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_INFLIGHT_FD
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd mmap_size: 4224
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd mmap_offset: 0
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd num_queues: 2
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd queue_size: 128
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd fd: 247
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd pervq_inflight_size: 2112
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_CALL
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) vring call idx:0 file:242
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_CALL
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) vring call idx:1 file:245
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_FEATURES
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) negotiated Virtio features: 0x150005446
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_STATUS
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_MEM_TABLE
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) guest memory region size: 0x40000000
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) 	 guest physical addr: 0x0
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) 	 guest virtual  addr: 0x7f03c7e00000
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) 	 host  virtual  addr: 0x7ff94b600000
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) 	 mmap addr : 0x7ff94b600000
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) 	 mmap size : 0x40000000
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) 	 mmap align: 0x200000
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) 	 mmap off  : 0x0
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_NUM
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_BASE
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ADDR
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_KICK
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) vring kick idx:0 file:248
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_NUM
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_BASE
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ADDR
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_KICK
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) vring kick idx:1 file:249
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ENABLE
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) set queue enable: 1 to qp idx: 0
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ENABLE
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) set queue enable: 1 to qp idx: 1
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_STATUS
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_STATUS
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) new device status(0x0000000f):
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) 	-RESET: 0
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) 	-ACKNOWLEDGE: 1
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER: 1
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) 	-FEATURES_OK: 1
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER_OK: 1
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) 	-DEVICE_NEED_RESET: 0
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) 	-FAILED: 0
00:16:01.226  VHOST_CONFIG: (/var/tmp/sma-32) virtio is now ready for processing.
00:16:01.226  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:16:01.794   22:43:02 sma.sma_vhost -- sma/vhost_blk.sh@158 -- # [[ 2 -eq 2 ]]
00:16:01.794   22:43:02 sma.sma_vhost -- sma/vhost_blk.sh@161 -- # for id in "${devids[@]}"
00:16:01.794   22:43:02 sma.sma_vhost -- sma/vhost_blk.sh@162 -- # delete_device virtio_blk:sma-0
00:16:01.794   22:43:02 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:01.794  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:01.794  I0000 00:00:1733866982.539288  167662 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:01.794  I0000 00:00:1733866982.541096  167662 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:01.794  I0000 00:00:1733866982.542442  167666 subchannel.cc:806] subchannel 0x55801a75fde0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55801a5ff840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55801a779da0, grpc.internal.client_channel_call_destination=0x7f148447e390, grpc.internal.event_engine=0x55801a47e030, grpc.internal.security_connector=0x55801a7112b0, grpc.internal.subchannel_pool=0x55801a5ce690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55801a2eb9a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:02.541929655+01:00"}), backing off for 999 ms
00:16:02.361  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:16:02.361  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000000):
00:16:02.361  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 1
00:16:02.361  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:16:02.361  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:16:02.361  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 0
00:16:02.361  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:16:02.361  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:16:02.361  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:16:02.361  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:16:02.361  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 0
00:16:02.361  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:16:02.361  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 1
00:16:02.361  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:16:02.361  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 file:47
00:16:02.361  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:16:02.361  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 file:3
00:16:02.361  VHOST_CONFIG: (/var/tmp/sma-0) vhost peer closed
00:16:02.361  {}
00:16:02.361   22:43:03 sma.sma_vhost -- sma/vhost_blk.sh@161 -- # for id in "${devids[@]}"
00:16:02.361   22:43:03 sma.sma_vhost -- sma/vhost_blk.sh@162 -- # delete_device virtio_blk:sma-32
00:16:02.361   22:43:03 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:02.620  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:02.620  I0000 00:00:1733866983.247636  167879 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:02.620  I0000 00:00:1733866983.249158  167879 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:02.620  I0000 00:00:1733866983.250522  167888 subchannel.cc:806] subchannel 0x555b753e8de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x555b75288840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x555b75402da0, grpc.internal.client_channel_call_destination=0x7f3a4bd3d390, grpc.internal.event_engine=0x555b75107030, grpc.internal.security_connector=0x555b7539a2b0, grpc.internal.subchannel_pool=0x555b75257690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x555b74f749a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:03.249980618+01:00"}), backing off for 1000 ms
00:16:02.620  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_STATUS
00:16:02.620  VHOST_CONFIG: (/var/tmp/sma-32) new device status(0x00000000):
00:16:02.620  VHOST_CONFIG: (/var/tmp/sma-32) 	-RESET: 1
00:16:02.620  VHOST_CONFIG: (/var/tmp/sma-32) 	-ACKNOWLEDGE: 0
00:16:02.620  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER: 0
00:16:02.620  VHOST_CONFIG: (/var/tmp/sma-32) 	-FEATURES_OK: 0
00:16:02.620  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER_OK: 0
00:16:02.620  VHOST_CONFIG: (/var/tmp/sma-32) 	-DEVICE_NEED_RESET: 0
00:16:02.620  VHOST_CONFIG: (/var/tmp/sma-32) 	-FAILED: 0
00:16:02.620  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ENABLE
00:16:02.620  VHOST_CONFIG: (/var/tmp/sma-32) set queue enable: 0 to qp idx: 0
00:16:02.620  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ENABLE
00:16:02.620  VHOST_CONFIG: (/var/tmp/sma-32) set queue enable: 0 to qp idx: 1
00:16:02.620  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_VRING_BASE
00:16:02.620  VHOST_CONFIG: (/var/tmp/sma-32) vring base idx:0 file:0
00:16:02.620  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_VRING_BASE
00:16:02.620  VHOST_CONFIG: (/var/tmp/sma-32) vring base idx:1 file:50
00:16:02.620  VHOST_CONFIG: (/var/tmp/sma-32) vhost peer closed
00:16:02.620  {}
00:16:02.879    22:43:03 sma.sma_vhost -- sma/vhost_blk.sh@166 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l'
00:16:02.879    22:43:03 sma.sma_vhost -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:16:02.879    22:43:03 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:16:02.879    22:43:03 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:16:02.879    22:43:03 sma.sma_vhost -- vhost/common.sh@338 -- # local vm_num=0
00:16:02.879    22:43:03 sma.sma_vhost -- vhost/common.sh@339 -- # shift
00:16:02.879     22:43:03 sma.sma_vhost -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:16:02.879     22:43:03 sma.sma_vhost -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:16:02.879     22:43:03 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:16:02.879     22:43:03 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:16:02.879     22:43:03 sma.sma_vhost -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:16:02.879     22:43:03 sma.sma_vhost -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:16:02.879    22:43:03 sma.sma_vhost -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l'
00:16:02.879  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:16:02.879   22:43:03 sma.sma_vhost -- sma/vhost_blk.sh@166 -- # [[ 0 -eq 0 ]]
00:16:02.879   22:43:03 sma.sma_vhost -- sma/vhost_blk.sh@168 -- # key0=1234567890abcdef1234567890abcdef
00:16:02.879   22:43:03 sma.sma_vhost -- sma/vhost_blk.sh@169 -- # rpc_cmd bdev_malloc_create -b malloc0 32 4096
00:16:02.879   22:43:03 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.879   22:43:03 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:02.879  malloc0
00:16:02.879   22:43:03 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.879    22:43:03 sma.sma_vhost -- sma/vhost_blk.sh@170 -- # rpc_cmd bdev_get_bdevs -b malloc0
00:16:02.879    22:43:03 sma.sma_vhost -- sma/vhost_blk.sh@170 -- # jq -r '.[].uuid'
00:16:02.879    22:43:03 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.879    22:43:03 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:02.879    22:43:03 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.138   22:43:03 sma.sma_vhost -- sma/vhost_blk.sh@170 -- # uuid=305d704b-74c7-4465-b758-7f3dcd2acfb6
00:16:03.138    22:43:03 sma.sma_vhost -- sma/vhost_blk.sh@210 -- # jq -r .handle
00:16:03.138    22:43:03 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:03.138     22:43:03 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # uuid2base64 305d704b-74c7-4465-b758-7f3dcd2acfb6
00:16:03.138     22:43:03 sma.sma_vhost -- sma/common.sh@20 -- # python
00:16:03.138     22:43:03 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # get_cipher AES_CBC
00:16:03.138     22:43:03 sma.sma_vhost -- sma/common.sh@27 -- # case "$1" in
00:16:03.138     22:43:03 sma.sma_vhost -- sma/common.sh@28 -- # echo 0
00:16:03.138     22:43:03 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # format_key 1234567890abcdef1234567890abcdef
00:16:03.138     22:43:03 sma.sma_vhost -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/63
00:16:03.138      22:43:03 sma.sma_vhost -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:16:03.396  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:03.396  I0000 00:00:1733866983.981838  167920 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:03.396  I0000 00:00:1733866983.983327  167920 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:03.396  I0000 00:00:1733866983.984724  168124 subchannel.cc:806] subchannel 0x562ef2b79de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x562ef2a19840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x562ef2b93da0, grpc.internal.client_channel_call_destination=0x7f270d01b390, grpc.internal.event_engine=0x562ef2898030, grpc.internal.security_connector=0x562ef2b2b2b0, grpc.internal.subchannel_pool=0x562ef29e8690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x562ef27059a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:03.984271913+01:00"}), backing off for 1000 ms
00:16:03.396  VHOST_CONFIG: (/var/tmp/sma-0) vhost-user server: socket created, fd: 252
00:16:03.396  VHOST_CONFIG: (/var/tmp/sma-0) binding succeeded
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) new vhost user connection is 60
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) new device, handle is 0
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Vhost-user protocol features: 0x11ebf
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_QUEUE_NUM
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_BACKEND_REQ_FD
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_OWNER
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:254
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:255
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_CONFIG
00:16:03.656   22:43:04 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # devid0=virtio_blk:sma-0
00:16:03.656    22:43:04 sma.sma_vhost -- sma/vhost_blk.sh@194 -- # rpc_cmd vhost_get_controllers
00:16:03.656    22:43:04 sma.sma_vhost -- sma/vhost_blk.sh@194 -- # jq -r '. | length'
00:16:03.656    22:43:04 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.656    22:43:04 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:03.656    22:43:04 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150007646
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000008):
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_INFLIGHT_FD
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd num_queues: 2
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd queue_size: 128
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_size: 4224
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_offset: 0
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) send inflight fd: 59
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_INFLIGHT_FD
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_size: 4224
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_offset: 0
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd num_queues: 2
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd queue_size: 128
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd fd: 256
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd pervq_inflight_size: 2112
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:59
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:254
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150007646
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_MEM_TABLE
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) guest memory region size: 0x40000000
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest physical addr: 0x0
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest virtual  addr: 0x7f03c7e00000
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) 	 host  virtual  addr: 0x7ff98b600000
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap addr : 0x7ff98b600000
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap size : 0x40000000
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap align: 0x200000
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap off  : 0x0
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:0 file:257
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:16:03.656   22:43:04 sma.sma_vhost -- sma/vhost_blk.sh@194 -- # [[ 1 -eq 1 ]]
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:1 file:258
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 0
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 1
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x0000000f):
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 1
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 1
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 1
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:16:03.656  VHOST_CONFIG: (/var/tmp/sma-0) virtio is now ready for processing.
00:16:03.656    22:43:04 sma.sma_vhost -- sma/vhost_blk.sh@195 -- # rpc_cmd vhost_get_controllers
00:16:03.656    22:43:04 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.656    22:43:04 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:03.656    22:43:04 sma.sma_vhost -- sma/vhost_blk.sh@195 -- # jq -r '.[].backend_specific.block.bdev'
00:16:03.915    22:43:04 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.915   22:43:04 sma.sma_vhost -- sma/vhost_blk.sh@195 -- # bdev=98ee9f42-c894-4da5-b6f8-b7649d272143
00:16:03.915    22:43:04 sma.sma_vhost -- sma/vhost_blk.sh@197 -- # rpc_cmd bdev_get_bdevs
00:16:03.915    22:43:04 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.915    22:43:04 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:03.915    22:43:04 sma.sma_vhost -- sma/vhost_blk.sh@197 -- # jq -r '.[] | select(.product_name == "crypto")'
00:16:03.915    22:43:04 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.915   22:43:04 sma.sma_vhost -- sma/vhost_blk.sh@197 -- # crypto_bdev='{
00:16:03.915    "name": "98ee9f42-c894-4da5-b6f8-b7649d272143",
00:16:03.915    "aliases": [
00:16:03.915      "ef46e428-ba08-59b3-9141-5ca0d4322406"
00:16:03.915    ],
00:16:03.915    "product_name": "crypto",
00:16:03.915    "block_size": 4096,
00:16:03.915    "num_blocks": 8192,
00:16:03.915    "uuid": "ef46e428-ba08-59b3-9141-5ca0d4322406",
00:16:03.915    "assigned_rate_limits": {
00:16:03.915      "rw_ios_per_sec": 0,
00:16:03.915      "rw_mbytes_per_sec": 0,
00:16:03.915      "r_mbytes_per_sec": 0,
00:16:03.915      "w_mbytes_per_sec": 0
00:16:03.915    },
00:16:03.915    "claimed": false,
00:16:03.915    "zoned": false,
00:16:03.915    "supported_io_types": {
00:16:03.915      "read": true,
00:16:03.915      "write": true,
00:16:03.915      "unmap": true,
00:16:03.915      "flush": true,
00:16:03.915      "reset": true,
00:16:03.915      "nvme_admin": false,
00:16:03.915      "nvme_io": false,
00:16:03.915      "nvme_io_md": false,
00:16:03.915      "write_zeroes": true,
00:16:03.915      "zcopy": false,
00:16:03.915      "get_zone_info": false,
00:16:03.915      "zone_management": false,
00:16:03.915      "zone_append": false,
00:16:03.915      "compare": false,
00:16:03.915      "compare_and_write": false,
00:16:03.915      "abort": false,
00:16:03.915      "seek_hole": false,
00:16:03.915      "seek_data": false,
00:16:03.915      "copy": false,
00:16:03.915      "nvme_iov_md": false
00:16:03.915    },
00:16:03.915    "memory_domains": [
00:16:03.915      {
00:16:03.915        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:03.915        "dma_device_type": 2
00:16:03.915      }
00:16:03.915    ],
00:16:03.915    "driver_specific": {
00:16:03.915      "crypto": {
00:16:03.915        "base_bdev_name": "malloc0",
00:16:03.915        "name": "98ee9f42-c894-4da5-b6f8-b7649d272143",
00:16:03.915        "key_name": "98ee9f42-c894-4da5-b6f8-b7649d272143_AES_CBC"
00:16:03.915      }
00:16:03.915    }
00:16:03.915  }'
00:16:03.915    22:43:04 sma.sma_vhost -- sma/vhost_blk.sh@198 -- # jq -r .driver_specific.crypto.name
00:16:03.915   22:43:04 sma.sma_vhost -- sma/vhost_blk.sh@198 -- # [[ 98ee9f42-c894-4da5-b6f8-b7649d272143 == \9\8\e\e\9\f\4\2\-\c\8\9\4\-\4\d\a\5\-\b\6\f\8\-\b\7\6\4\9\d\2\7\2\1\4\3 ]]
00:16:03.915    22:43:04 sma.sma_vhost -- sma/vhost_blk.sh@199 -- # jq -r .driver_specific.crypto.key_name
00:16:03.915   22:43:04 sma.sma_vhost -- sma/vhost_blk.sh@199 -- # key_name=98ee9f42-c894-4da5-b6f8-b7649d272143_AES_CBC
00:16:03.915    22:43:04 sma.sma_vhost -- sma/vhost_blk.sh@200 -- # rpc_cmd accel_crypto_keys_get -k 98ee9f42-c894-4da5-b6f8-b7649d272143_AES_CBC
00:16:03.915    22:43:04 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.915    22:43:04 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:03.915    22:43:04 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.915   22:43:04 sma.sma_vhost -- sma/vhost_blk.sh@200 -- # key_obj='[
00:16:03.915  {
00:16:03.915  "name": "98ee9f42-c894-4da5-b6f8-b7649d272143_AES_CBC",
00:16:03.915  "cipher": "AES_CBC",
00:16:03.915  "key": "1234567890abcdef1234567890abcdef"
00:16:03.915  }
00:16:03.915  ]'
00:16:03.915    22:43:04 sma.sma_vhost -- sma/vhost_blk.sh@201 -- # jq -r '.[0].key'
00:16:03.915   22:43:04 sma.sma_vhost -- sma/vhost_blk.sh@201 -- # [[ 1234567890abcdef1234567890abcdef == \1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f\1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f ]]
00:16:03.915    22:43:04 sma.sma_vhost -- sma/vhost_blk.sh@202 -- # jq -r '.[0].cipher'
00:16:03.915   22:43:04 sma.sma_vhost -- sma/vhost_blk.sh@202 -- # [[ AES_CBC == \A\E\S\_\C\B\C ]]
00:16:03.915   22:43:04 sma.sma_vhost -- sma/vhost_blk.sh@205 -- # delete_device virtio_blk:sma-0
00:16:03.915   22:43:04 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:04.174  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:04.174  I0000 00:00:1733866984.878108  168172 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:04.174  I0000 00:00:1733866984.879870  168172 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:04.174  I0000 00:00:1733866984.881197  168173 subchannel.cc:806] subchannel 0x564931261de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x564931101840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56493127bda0, grpc.internal.client_channel_call_destination=0x7f07d2ff9390, grpc.internal.event_engine=0x564930f80030, grpc.internal.security_connector=0x5649312132b0, grpc.internal.subchannel_pool=0x5649310d0690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x564930ded9a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:04.880690126+01:00"}), backing off for 999 ms
00:16:04.174  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:16:04.174  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000000):
00:16:04.174  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 1
00:16:04.174  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:16:04.174  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:16:04.174  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 0
00:16:04.174  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:16:04.174  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:16:04.174  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:16:04.174  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:16:04.174  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 0
00:16:04.174  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:16:04.174  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 1
00:16:04.174  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:16:04.174  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 file:36
00:16:04.174  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:16:04.174  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 file:0
00:16:04.434  VHOST_CONFIG: (/var/tmp/sma-0) vhost peer closed
00:16:04.434  {}
00:16:04.434    22:43:05 sma.sma_vhost -- sma/vhost_blk.sh@206 -- # jq -r length
00:16:04.434    22:43:05 sma.sma_vhost -- sma/vhost_blk.sh@206 -- # rpc_cmd bdev_get_bdevs
00:16:04.434    22:43:05 sma.sma_vhost -- sma/vhost_blk.sh@206 -- # jq -r '.[] | select(.product_name == "crypto")'
00:16:04.434    22:43:05 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.434    22:43:05 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:04.434    22:43:05 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.434   22:43:05 sma.sma_vhost -- sma/vhost_blk.sh@206 -- # [[ '' -eq 0 ]]
00:16:04.434   22:43:05 sma.sma_vhost -- sma/vhost_blk.sh@209 -- # device_vhost=2
00:16:04.434    22:43:05 sma.sma_vhost -- sma/vhost_blk.sh@210 -- # rpc_cmd bdev_get_bdevs -b null0
00:16:04.434    22:43:05 sma.sma_vhost -- sma/vhost_blk.sh@210 -- # jq -r '.[].uuid'
00:16:04.434    22:43:05 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.434    22:43:05 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:04.434    22:43:05 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.434   22:43:05 sma.sma_vhost -- sma/vhost_blk.sh@210 -- # uuid=017f1e14-f810-4b5a-9f4e-1ded894ebe7f
00:16:04.434    22:43:05 sma.sma_vhost -- sma/vhost_blk.sh@211 -- # create_device 0 017f1e14-f810-4b5a-9f4e-1ded894ebe7f
00:16:04.434    22:43:05 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:04.434    22:43:05 sma.sma_vhost -- sma/vhost_blk.sh@211 -- # jq -r .handle
00:16:04.434     22:43:05 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 017f1e14-f810-4b5a-9f4e-1ded894ebe7f
00:16:04.434     22:43:05 sma.sma_vhost -- sma/common.sh@20 -- # python
00:16:04.698  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:04.698  I0000 00:00:1733866985.449431  168383 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:04.698  I0000 00:00:1733866985.451149  168383 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:04.698  I0000 00:00:1733866985.452601  168404 subchannel.cc:806] subchannel 0x5654809fede0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56548089e840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x565480a18da0, grpc.internal.client_channel_call_destination=0x7f2fcfa25390, grpc.internal.event_engine=0x56548071d030, grpc.internal.security_connector=0x5654809b02b0, grpc.internal.subchannel_pool=0x56548086d690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56548058a9a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:05.45213021+01:00"}), backing off for 1000 ms
00:16:04.956  VHOST_CONFIG: (/var/tmp/sma-0) vhost-user server: socket created, fd: 252
00:16:04.956  VHOST_CONFIG: (/var/tmp/sma-0) binding succeeded
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) new vhost user connection is 58
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) new device, handle is 0
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Vhost-user protocol features: 0x11ebf
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_QUEUE_NUM
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_BACKEND_REQ_FD
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_OWNER
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:254
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:255
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_CONFIG
00:16:05.523   22:43:06 sma.sma_vhost -- sma/vhost_blk.sh@211 -- # device=virtio_blk:sma-0
00:16:05.523    22:43:06 sma.sma_vhost -- sma/vhost_blk.sh@214 -- # jq --sort-keys
00:16:05.523   22:43:06 sma.sma_vhost -- sma/vhost_blk.sh@214 -- # diff /dev/fd/62 /dev/fd/61
00:16:05.523    22:43:06 sma.sma_vhost -- sma/vhost_blk.sh@214 -- # get_qos_caps 2
00:16:05.523    22:43:06 sma.sma_vhost -- sma/vhost_blk.sh@214 -- # jq --sort-keys
00:16:05.523    22:43:06 sma.sma_vhost -- sma/common.sh@45 -- # local rootdir
00:16:05.523     22:43:06 sma.sma_vhost -- sma/common.sh@47 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:16:05.523    22:43:06 sma.sma_vhost -- sma/common.sh@47 -- # rootdir=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../..
00:16:05.523    22:43:06 sma.sma_vhost -- sma/common.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000008):
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_INFLIGHT_FD
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd num_queues: 2
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd queue_size: 128
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_size: 4224
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_offset: 0
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) send inflight fd: 60
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_INFLIGHT_FD
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_size: 4224
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_offset: 0
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd num_queues: 2
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd queue_size: 128
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd fd: 256
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd pervq_inflight_size: 2112
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:60
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:254
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_MEM_TABLE
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) guest memory region size: 0x40000000
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest physical addr: 0x0
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest virtual  addr: 0x7f03c7e00000
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) 	 host  virtual  addr: 0x7ff94b400000
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap addr : 0x7ff94b400000
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap size : 0x40000000
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap align: 0x200000
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap off  : 0x0
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:0 file:257
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:1 file:258
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 0
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:16:05.523  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 1
00:16:05.524  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:16:05.524  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:16:05.524  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x0000000f):
00:16:05.524  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:16:05.524  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 1
00:16:05.524  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 1
00:16:05.524  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:16:05.524  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 1
00:16:05.524  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:16:05.524  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:16:05.524  VHOST_CONFIG: (/var/tmp/sma-0) virtio is now ready for processing.
00:16:05.524  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:05.524  I0000 00:00:1733866986.277536  168439 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:05.524  I0000 00:00:1733866986.282357  168439 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:05.524  I0000 00:00:1733866986.283730  168550 subchannel.cc:806] subchannel 0x557310ddafa0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x557310b19cc0, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x557310c6c9f0, grpc.internal.client_channel_call_destination=0x7f8863a8c390, grpc.internal.event_engine=0x557310dbfec0, grpc.internal.security_connector=0x557310be5030, grpc.internal.subchannel_pool=0x557310dab5b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x557310b81320, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:06.283251124+01:00"}), backing off for 1000 ms
00:16:05.782   22:43:06 sma.sma_vhost -- sma/vhost_blk.sh@233 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:05.782    22:43:06 sma.sma_vhost -- sma/vhost_blk.sh@233 -- # uuid2base64 017f1e14-f810-4b5a-9f4e-1ded894ebe7f
00:16:05.782    22:43:06 sma.sma_vhost -- sma/common.sh@20 -- # python
00:16:05.782  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:05.782  I0000 00:00:1733866986.540464  168595 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:05.782  I0000 00:00:1733866986.545173  168595 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:05.782  I0000 00:00:1733866986.546375  168664 subchannel.cc:806] subchannel 0x55aba81e3de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55aba8083840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55aba81fdda0, grpc.internal.client_channel_call_destination=0x7fb5ec59f390, grpc.internal.event_engine=0x55aba8070490, grpc.internal.security_connector=0x55aba81952b0, grpc.internal.subchannel_pool=0x55aba8052690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55aba7d6f9a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:06.545970533+01:00"}), backing off for 1000 ms
00:16:06.041  {}
00:16:06.041   22:43:06 sma.sma_vhost -- sma/vhost_blk.sh@252 -- # diff /dev/fd/62 /dev/fd/61
00:16:06.041    22:43:06 sma.sma_vhost -- sma/vhost_blk.sh@252 -- # rpc_cmd bdev_get_bdevs -b 017f1e14-f810-4b5a-9f4e-1ded894ebe7f
00:16:06.041    22:43:06 sma.sma_vhost -- sma/vhost_blk.sh@252 -- # jq --sort-keys '.[].assigned_rate_limits'
00:16:06.042    22:43:06 sma.sma_vhost -- sma/vhost_blk.sh@252 -- # jq --sort-keys
00:16:06.042    22:43:06 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:06.042    22:43:06 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:06.042    22:43:06 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:06.042   22:43:06 sma.sma_vhost -- sma/vhost_blk.sh@264 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:06.301  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:06.301  I0000 00:00:1733866986.843928  168692 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:06.301  I0000 00:00:1733866986.845411  168692 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:06.301  I0000 00:00:1733866986.846762  168694 subchannel.cc:806] subchannel 0x5578ef621de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5578ef4c1840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5578ef63bda0, grpc.internal.client_channel_call_destination=0x7f12a5f15390, grpc.internal.event_engine=0x5578ef340030, grpc.internal.security_connector=0x5578ef4c9770, grpc.internal.subchannel_pool=0x5578ef490690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5578ef1ad9a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:06.846286795+01:00"}), backing off for 1000 ms
00:16:06.301  {}
00:16:06.301    22:43:06 sma.sma_vhost -- sma/vhost_blk.sh@283 -- # rpc_cmd bdev_get_bdevs -b 017f1e14-f810-4b5a-9f4e-1ded894ebe7f
00:16:06.301   22:43:06 sma.sma_vhost -- sma/vhost_blk.sh@283 -- # diff /dev/fd/62 /dev/fd/61
00:16:06.301    22:43:06 sma.sma_vhost -- sma/vhost_blk.sh@283 -- # jq --sort-keys
00:16:06.301    22:43:06 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:06.301    22:43:06 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:06.301    22:43:06 sma.sma_vhost -- sma/vhost_blk.sh@283 -- # jq --sort-keys '.[].assigned_rate_limits'
00:16:06.301    22:43:06 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:06.301   22:43:06 sma.sma_vhost -- sma/vhost_blk.sh@295 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:06.301     22:43:06 sma.sma_vhost -- sma/vhost_blk.sh@295 -- # uuidgen
00:16:06.301    22:43:06 sma.sma_vhost -- sma/vhost_blk.sh@295 -- # uuid2base64 a6b860cc-f7c6-4d65-8add-8e0c806cb92c
00:16:06.301    22:43:06 sma.sma_vhost -- sma/common.sh@20 -- # python
00:16:06.301   22:43:06 sma.sma_vhost -- common/autotest_common.sh@652 -- # local es=0
00:16:06.301   22:43:06 sma.sma_vhost -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:06.301   22:43:06 sma.sma_vhost -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:06.301   22:43:06 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:06.301    22:43:06 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:06.301   22:43:06 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:06.301    22:43:06 sma.sma_vhost -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:06.301   22:43:06 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:06.301   22:43:06 sma.sma_vhost -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:06.301   22:43:06 sma.sma_vhost -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:16:06.301   22:43:06 sma.sma_vhost -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:06.560  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:06.560  I0000 00:00:1733866987.175456  168726 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:06.560  I0000 00:00:1733866987.177131  168726 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:06.560  I0000 00:00:1733866987.178441  168733 subchannel.cc:806] subchannel 0x55eb82147de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55eb81fe7840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55eb82161da0, grpc.internal.client_channel_call_destination=0x7f4bf9272390, grpc.internal.event_engine=0x55eb81fd4490, grpc.internal.security_connector=0x55eb820f92b0, grpc.internal.subchannel_pool=0x55eb81fb6690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55eb81cd39a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:07.17800342+01:00"}), backing off for 1000 ms
00:16:06.560  [2024-12-10 22:43:07.212642] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: a6b860cc-f7c6-4d65-8add-8e0c806cb92c
00:16:06.560  Traceback (most recent call last):
00:16:06.560    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:16:06.560      main(sys.argv[1:])
00:16:06.560    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:16:06.560      result = client.call(request['method'], request.get('params', {}))
00:16:06.560               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:06.560    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:16:06.560      response = func(request=json_format.ParseDict(params, input()))
00:16:06.560                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:06.560    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:16:06.560      return _end_unary_response_blocking(state, call, False, None)
00:16:06.560             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:06.560    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:16:06.560      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:16:06.560      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:06.560  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:06.560  	status = StatusCode.INVALID_ARGUMENT
00:16:06.560  	details = "Specified volume is not attached to the device"
00:16:06.560  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-12-10T22:43:07.21717144+01:00", grpc_status:3, grpc_message:"Specified volume is not attached to the device"}"
00:16:06.560  >
00:16:06.560   22:43:07 sma.sma_vhost -- common/autotest_common.sh@655 -- # es=1
00:16:06.560   22:43:07 sma.sma_vhost -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:06.560   22:43:07 sma.sma_vhost -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:06.560   22:43:07 sma.sma_vhost -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:06.560   22:43:07 sma.sma_vhost -- sma/vhost_blk.sh@314 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:06.560    22:43:07 sma.sma_vhost -- sma/vhost_blk.sh@314 -- # base64
00:16:06.560   22:43:07 sma.sma_vhost -- common/autotest_common.sh@652 -- # local es=0
00:16:06.560   22:43:07 sma.sma_vhost -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:06.560   22:43:07 sma.sma_vhost -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:06.560   22:43:07 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:06.560    22:43:07 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:06.560   22:43:07 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:06.560    22:43:07 sma.sma_vhost -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:06.560   22:43:07 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:06.560   22:43:07 sma.sma_vhost -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:06.560   22:43:07 sma.sma_vhost -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:16:06.560   22:43:07 sma.sma_vhost -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:06.819  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:06.819  I0000 00:00:1733866987.438562  168757 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:06.819  I0000 00:00:1733866987.440012  168757 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:06.819  I0000 00:00:1733866987.441465  168820 subchannel.cc:806] subchannel 0x562f069aade0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x562f0684a840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x562f069c4da0, grpc.internal.client_channel_call_destination=0x7fbfaae43390, grpc.internal.event_engine=0x562f066c9030, grpc.internal.security_connector=0x562f06852770, grpc.internal.subchannel_pool=0x562f06819690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x562f065369a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:07.440884895+01:00"}), backing off for 999 ms
00:16:06.819  Traceback (most recent call last):
00:16:06.819    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:16:06.819      main(sys.argv[1:])
00:16:06.819    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:16:06.819      result = client.call(request['method'], request.get('params', {}))
00:16:06.819               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:06.819    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:16:06.819      response = func(request=json_format.ParseDict(params, input()))
00:16:06.819                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:06.819    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:16:06.819      return _end_unary_response_blocking(state, call, False, None)
00:16:06.819             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:06.819    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:16:06.819      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:16:06.819      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:06.819  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:06.819  	status = StatusCode.INVALID_ARGUMENT
00:16:06.819  	details = "Invalid volume uuid"
00:16:06.819  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid volume uuid", grpc_status:3, created_time:"2024-12-10T22:43:07.449643674+01:00"}"
00:16:06.819  >
00:16:06.819   22:43:07 sma.sma_vhost -- common/autotest_common.sh@655 -- # es=1
00:16:06.819   22:43:07 sma.sma_vhost -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:06.820   22:43:07 sma.sma_vhost -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:06.820   22:43:07 sma.sma_vhost -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:06.820   22:43:07 sma.sma_vhost -- sma/vhost_blk.sh@333 -- # diff /dev/fd/62 /dev/fd/61
00:16:06.820    22:43:07 sma.sma_vhost -- sma/vhost_blk.sh@333 -- # jq --sort-keys
00:16:06.820    22:43:07 sma.sma_vhost -- sma/vhost_blk.sh@333 -- # rpc_cmd bdev_get_bdevs -b 017f1e14-f810-4b5a-9f4e-1ded894ebe7f
00:16:06.820    22:43:07 sma.sma_vhost -- sma/vhost_blk.sh@333 -- # jq --sort-keys '.[].assigned_rate_limits'
00:16:06.820    22:43:07 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:06.820    22:43:07 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:06.820    22:43:07 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:06.820   22:43:07 sma.sma_vhost -- sma/vhost_blk.sh@344 -- # delete_device virtio_blk:sma-0
00:16:06.820   22:43:07 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:07.078  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:07.078  I0000 00:00:1733866987.717765  168949 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:07.078  I0000 00:00:1733866987.719383  168949 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:07.079  I0000 00:00:1733866987.720792  168983 subchannel.cc:806] subchannel 0x55eacea61de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55eace901840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55eacea7bda0, grpc.internal.client_channel_call_destination=0x7f88960e8390, grpc.internal.event_engine=0x55eace780030, grpc.internal.security_connector=0x55eacea132b0, grpc.internal.subchannel_pool=0x55eace8d0690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55eace5ed9a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:07.720194925+01:00"}), backing off for 1000 ms
00:16:07.647  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:16:07.647  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000000):
00:16:07.647  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 1
00:16:07.647  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:16:07.647  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:16:07.647  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 0
00:16:07.647  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:16:07.647  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:16:07.647  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:16:07.647  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:16:07.647  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 0
00:16:07.647  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:16:07.647  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 1
00:16:07.647  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:16:07.647  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 file:49
00:16:07.647  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:16:07.647  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 file:1
00:16:07.647  VHOST_CONFIG: (/var/tmp/sma-0) vhost peer closed
00:16:07.647  {}
00:16:07.647   22:43:08 sma.sma_vhost -- sma/vhost_blk.sh@346 -- # cleanup
00:16:07.647   22:43:08 sma.sma_vhost -- sma/vhost_blk.sh@14 -- # killprocess 165438
00:16:07.647   22:43:08 sma.sma_vhost -- common/autotest_common.sh@954 -- # '[' -z 165438 ']'
00:16:07.647   22:43:08 sma.sma_vhost -- common/autotest_common.sh@958 -- # kill -0 165438
00:16:07.647    22:43:08 sma.sma_vhost -- common/autotest_common.sh@959 -- # uname
00:16:07.647   22:43:08 sma.sma_vhost -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:07.647    22:43:08 sma.sma_vhost -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 165438
00:16:07.647   22:43:08 sma.sma_vhost -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:07.647   22:43:08 sma.sma_vhost -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:07.647   22:43:08 sma.sma_vhost -- common/autotest_common.sh@972 -- # echo 'killing process with pid 165438'
00:16:07.647  killing process with pid 165438
00:16:07.647   22:43:08 sma.sma_vhost -- common/autotest_common.sh@973 -- # kill 165438
00:16:07.647   22:43:08 sma.sma_vhost -- common/autotest_common.sh@978 -- # wait 165438
00:16:09.027   22:43:09 sma.sma_vhost -- sma/vhost_blk.sh@15 -- # killprocess 165649
00:16:09.027   22:43:09 sma.sma_vhost -- common/autotest_common.sh@954 -- # '[' -z 165649 ']'
00:16:09.027   22:43:09 sma.sma_vhost -- common/autotest_common.sh@958 -- # kill -0 165649
00:16:09.027    22:43:09 sma.sma_vhost -- common/autotest_common.sh@959 -- # uname
00:16:09.027   22:43:09 sma.sma_vhost -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:09.027    22:43:09 sma.sma_vhost -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 165649
00:16:09.027   22:43:09 sma.sma_vhost -- common/autotest_common.sh@960 -- # process_name=python3
00:16:09.027   22:43:09 sma.sma_vhost -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:16:09.027   22:43:09 sma.sma_vhost -- common/autotest_common.sh@972 -- # echo 'killing process with pid 165649'
00:16:09.027  killing process with pid 165649
00:16:09.027   22:43:09 sma.sma_vhost -- common/autotest_common.sh@973 -- # kill 165649
00:16:09.027   22:43:09 sma.sma_vhost -- common/autotest_common.sh@978 -- # wait 165649
00:16:09.027   22:43:09 sma.sma_vhost -- sma/vhost_blk.sh@16 -- # vm_kill_all
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@476 -- # local vm
00:16:09.027    22:43:09 sma.sma_vhost -- vhost/common.sh@477 -- # vm_list_all
00:16:09.027    22:43:09 sma.sma_vhost -- vhost/common.sh@466 -- # vms=()
00:16:09.027    22:43:09 sma.sma_vhost -- vhost/common.sh@466 -- # local vms
00:16:09.027    22:43:09 sma.sma_vhost -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:16:09.027    22:43:09 sma.sma_vhost -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:16:09.027    22:43:09 sma.sma_vhost -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/0
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@478 -- # vm_kill 0
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@442 -- # vm_num_is_valid 0
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/0
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@449 -- # local vm_pid
00:16:09.027    22:43:09 sma.sma_vhost -- vhost/common.sh@450 -- # cat /root/vhost_test/vms/0/qemu.pid
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@450 -- # vm_pid=161463
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@452 -- # notice 'Killing virtual machine /root/vhost_test/vms/0 (pid=161463)'
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'Killing virtual machine /root/vhost_test/vms/0 (pid=161463)'
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: Killing virtual machine /root/vhost_test/vms/0 (pid=161463)'
00:16:09.027  INFO: Killing virtual machine /root/vhost_test/vms/0 (pid=161463)
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@454 -- # /bin/kill 161463
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@455 -- # notice 'process 161463 killed'
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'process 161463 killed'
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: process 161463 killed'
00:16:09.027  INFO: process 161463 killed
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@456 -- # rm -rf /root/vhost_test/vms/0
00:16:09.027   22:43:09 sma.sma_vhost -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:16:09.027   22:43:09 sma.sma_vhost -- sma/vhost_blk.sh@347 -- # trap - SIGINT SIGTERM EXIT
00:16:09.027  
00:16:09.027  real	0m41.639s
00:16:09.027  user	0m42.285s
00:16:09.027  sys	0m2.394s
00:16:09.027   22:43:09 sma.sma_vhost -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:09.027   22:43:09 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:09.027  ************************************
00:16:09.027  END TEST sma_vhost
00:16:09.027  ************************************
00:16:09.027   22:43:09 sma -- sma/sma.sh@16 -- # run_test sma_crypto /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/crypto.sh
00:16:09.027   22:43:09 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:09.027   22:43:09 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:09.027   22:43:09 sma -- common/autotest_common.sh@10 -- # set +x
00:16:09.027  ************************************
00:16:09.027  START TEST sma_crypto
00:16:09.027  ************************************
00:16:09.027   22:43:09 sma.sma_crypto -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/crypto.sh
00:16:09.027  * Looking for test storage...
00:16:09.027  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:16:09.028    22:43:09 sma.sma_crypto -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:16:09.028     22:43:09 sma.sma_crypto -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:16:09.028     22:43:09 sma.sma_crypto -- common/autotest_common.sh@1711 -- # lcov --version
00:16:09.028    22:43:09 sma.sma_crypto -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:16:09.028    22:43:09 sma.sma_crypto -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:09.028    22:43:09 sma.sma_crypto -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:09.028    22:43:09 sma.sma_crypto -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:09.028    22:43:09 sma.sma_crypto -- scripts/common.sh@336 -- # IFS=.-:
00:16:09.028    22:43:09 sma.sma_crypto -- scripts/common.sh@336 -- # read -ra ver1
00:16:09.028    22:43:09 sma.sma_crypto -- scripts/common.sh@337 -- # IFS=.-:
00:16:09.028    22:43:09 sma.sma_crypto -- scripts/common.sh@337 -- # read -ra ver2
00:16:09.028    22:43:09 sma.sma_crypto -- scripts/common.sh@338 -- # local 'op=<'
00:16:09.028    22:43:09 sma.sma_crypto -- scripts/common.sh@340 -- # ver1_l=2
00:16:09.028    22:43:09 sma.sma_crypto -- scripts/common.sh@341 -- # ver2_l=1
00:16:09.028    22:43:09 sma.sma_crypto -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:09.028    22:43:09 sma.sma_crypto -- scripts/common.sh@344 -- # case "$op" in
00:16:09.028    22:43:09 sma.sma_crypto -- scripts/common.sh@345 -- # : 1
00:16:09.028    22:43:09 sma.sma_crypto -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:09.028    22:43:09 sma.sma_crypto -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:09.028     22:43:09 sma.sma_crypto -- scripts/common.sh@365 -- # decimal 1
00:16:09.028     22:43:09 sma.sma_crypto -- scripts/common.sh@353 -- # local d=1
00:16:09.028     22:43:09 sma.sma_crypto -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:09.028     22:43:09 sma.sma_crypto -- scripts/common.sh@355 -- # echo 1
00:16:09.028    22:43:09 sma.sma_crypto -- scripts/common.sh@365 -- # ver1[v]=1
00:16:09.028     22:43:09 sma.sma_crypto -- scripts/common.sh@366 -- # decimal 2
00:16:09.028     22:43:09 sma.sma_crypto -- scripts/common.sh@353 -- # local d=2
00:16:09.028     22:43:09 sma.sma_crypto -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:09.028     22:43:09 sma.sma_crypto -- scripts/common.sh@355 -- # echo 2
00:16:09.028    22:43:09 sma.sma_crypto -- scripts/common.sh@366 -- # ver2[v]=2
00:16:09.028    22:43:09 sma.sma_crypto -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:09.028    22:43:09 sma.sma_crypto -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:09.028    22:43:09 sma.sma_crypto -- scripts/common.sh@368 -- # return 0
00:16:09.028    22:43:09 sma.sma_crypto -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:09.028    22:43:09 sma.sma_crypto -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:16:09.028  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:09.028  		--rc genhtml_branch_coverage=1
00:16:09.028  		--rc genhtml_function_coverage=1
00:16:09.028  		--rc genhtml_legend=1
00:16:09.028  		--rc geninfo_all_blocks=1
00:16:09.028  		--rc geninfo_unexecuted_blocks=1
00:16:09.028  		
00:16:09.028  		'
00:16:09.028    22:43:09 sma.sma_crypto -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:16:09.028  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:09.028  		--rc genhtml_branch_coverage=1
00:16:09.028  		--rc genhtml_function_coverage=1
00:16:09.028  		--rc genhtml_legend=1
00:16:09.028  		--rc geninfo_all_blocks=1
00:16:09.028  		--rc geninfo_unexecuted_blocks=1
00:16:09.028  		
00:16:09.028  		'
00:16:09.028    22:43:09 sma.sma_crypto -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:16:09.028  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:09.028  		--rc genhtml_branch_coverage=1
00:16:09.028  		--rc genhtml_function_coverage=1
00:16:09.028  		--rc genhtml_legend=1
00:16:09.028  		--rc geninfo_all_blocks=1
00:16:09.028  		--rc geninfo_unexecuted_blocks=1
00:16:09.028  		
00:16:09.028  		'
00:16:09.028    22:43:09 sma.sma_crypto -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:16:09.028  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:09.028  		--rc genhtml_branch_coverage=1
00:16:09.028  		--rc genhtml_function_coverage=1
00:16:09.028  		--rc genhtml_legend=1
00:16:09.028  		--rc geninfo_all_blocks=1
00:16:09.028  		--rc geninfo_unexecuted_blocks=1
00:16:09.028  		
00:16:09.028  		'
00:16:09.028   22:43:09 sma.sma_crypto -- sma/crypto.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:16:09.028   22:43:09 sma.sma_crypto -- sma/crypto.sh@13 -- # rpc_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:16:09.028   22:43:09 sma.sma_crypto -- sma/crypto.sh@14 -- # localnqn=nqn.2016-06.io.spdk:cnode0
00:16:09.028   22:43:09 sma.sma_crypto -- sma/crypto.sh@15 -- # tgtnqn=nqn.2016-06.io.spdk:tgt0
00:16:09.028   22:43:09 sma.sma_crypto -- sma/crypto.sh@16 -- # key0=1234567890abcdef1234567890abcdef
00:16:09.028   22:43:09 sma.sma_crypto -- sma/crypto.sh@17 -- # key1=deadbeefcafebabefeedbeefbabecafe
00:16:09.028   22:43:09 sma.sma_crypto -- sma/crypto.sh@18 -- # tgtsock=/var/tmp/spdk.sock2
00:16:09.028   22:43:09 sma.sma_crypto -- sma/crypto.sh@19 -- # discovery_port=8009
00:16:09.028   22:43:09 sma.sma_crypto -- sma/crypto.sh@145 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:16:09.028   22:43:09 sma.sma_crypto -- sma/crypto.sh@147 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --wait-for-rpc
00:16:09.028   22:43:09 sma.sma_crypto -- sma/crypto.sh@148 -- # hostpid=169288
00:16:09.028   22:43:09 sma.sma_crypto -- sma/crypto.sh@150 -- # waitforlisten 169288
00:16:09.028   22:43:09 sma.sma_crypto -- common/autotest_common.sh@835 -- # '[' -z 169288 ']'
00:16:09.028   22:43:09 sma.sma_crypto -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:09.028   22:43:09 sma.sma_crypto -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:09.028   22:43:09 sma.sma_crypto -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:09.028  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:09.028   22:43:09 sma.sma_crypto -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:09.028   22:43:09 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:09.028  [2024-12-10 22:43:09.775809] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:16:09.028  [2024-12-10 22:43:09.775927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169288 ]
00:16:09.288  EAL: No free 2048 kB hugepages reported on node 1
00:16:09.288  [2024-12-10 22:43:09.910539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:09.288  [2024-12-10 22:43:10.060142] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:09.856   22:43:10 sma.sma_crypto -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:09.856   22:43:10 sma.sma_crypto -- common/autotest_common.sh@868 -- # return 0
00:16:09.856   22:43:10 sma.sma_crypto -- sma/crypto.sh@153 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py dpdk_cryptodev_scan_accel_module
00:16:10.114   22:43:10 sma.sma_crypto -- sma/crypto.sh@154 -- # rpc_cmd dpdk_cryptodev_set_driver -d crypto_aesni_mb
00:16:10.114   22:43:10 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:10.114   22:43:10 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:10.114  [2024-12-10 22:43:10.858951] accel_dpdk_cryptodev.c: 224:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:16:10.114   22:43:10 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:10.114   22:43:10 sma.sma_crypto -- sma/crypto.sh@155 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py accel_assign_opc -o encrypt -m dpdk_cryptodev
00:16:10.373  [2024-12-10 22:43:11.055468] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:16:10.373   22:43:11 sma.sma_crypto -- sma/crypto.sh@156 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py accel_assign_opc -o decrypt -m dpdk_cryptodev
00:16:10.631  [2024-12-10 22:43:11.256004] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:16:10.631   22:43:11 sma.sma_crypto -- sma/crypto.sh@157 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py framework_start_init
00:16:11.198  [2024-12-10 22:43:11.760370] accel_dpdk_cryptodev.c:1179:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 1
00:16:11.765   22:43:12 sma.sma_crypto -- sma/crypto.sh@159 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/spdk.sock2 -m 0x2
00:16:11.765   22:43:12 sma.sma_crypto -- sma/crypto.sh@160 -- # tgtpid=169908
00:16:11.765   22:43:12 sma.sma_crypto -- sma/crypto.sh@172 -- # smapid=169909
00:16:11.765   22:43:12 sma.sma_crypto -- sma/crypto.sh@175 -- # sma_waitforlisten
00:16:11.765   22:43:12 sma.sma_crypto -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:16:11.765   22:43:12 sma.sma_crypto -- sma/crypto.sh@162 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:16:11.765    22:43:12 sma.sma_crypto -- sma/crypto.sh@162 -- # cat
00:16:11.765   22:43:12 sma.sma_crypto -- sma/common.sh@8 -- # local sma_port=8080
00:16:11.765   22:43:12 sma.sma_crypto -- sma/common.sh@10 -- # (( i = 0 ))
00:16:11.765   22:43:12 sma.sma_crypto -- sma/common.sh@10 -- # (( i < 5 ))
00:16:11.765   22:43:12 sma.sma_crypto -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:16:12.024   22:43:12 sma.sma_crypto -- sma/common.sh@14 -- # sleep 1s
00:16:12.024  [2024-12-10 22:43:12.617821] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:16:12.024  [2024-12-10 22:43:12.617939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169908 ]
00:16:12.024  EAL: No free 2048 kB hugepages reported on node 1
00:16:12.024  [2024-12-10 22:43:12.744875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:12.024  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:12.024  I0000 00:00:1733866992.745952  169909 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:12.024  [2024-12-10 22:43:12.759818] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:12.283  [2024-12-10 22:43:12.885139] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:16:12.850   22:43:13 sma.sma_crypto -- sma/common.sh@10 -- # (( i++ ))
00:16:12.850   22:43:13 sma.sma_crypto -- sma/common.sh@10 -- # (( i < 5 ))
00:16:12.850   22:43:13 sma.sma_crypto -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:16:12.850   22:43:13 sma.sma_crypto -- sma/common.sh@12 -- # return 0
00:16:12.850    22:43:13 sma.sma_crypto -- sma/crypto.sh@178 -- # uuidgen
00:16:12.850   22:43:13 sma.sma_crypto -- sma/crypto.sh@178 -- # uuid=5a790f10-948b-4632-a666-83fd8fe920c9
00:16:12.850   22:43:13 sma.sma_crypto -- sma/crypto.sh@179 -- # waitforlisten 169908 /var/tmp/spdk.sock2
00:16:12.850   22:43:13 sma.sma_crypto -- common/autotest_common.sh@835 -- # '[' -z 169908 ']'
00:16:12.850   22:43:13 sma.sma_crypto -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock2
00:16:12.850   22:43:13 sma.sma_crypto -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:12.850   22:43:13 sma.sma_crypto -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock2...'
00:16:12.850  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock2...
00:16:12.850   22:43:13 sma.sma_crypto -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:12.850   22:43:13 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:13.108   22:43:13 sma.sma_crypto -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:13.108   22:43:13 sma.sma_crypto -- common/autotest_common.sh@868 -- # return 0
00:16:13.108   22:43:13 sma.sma_crypto -- sma/crypto.sh@180 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2
00:16:13.366  [2024-12-10 22:43:14.102479] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:13.366  [2024-12-10 22:43:14.118851] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 8009 ***
00:16:13.366  [2024-12-10 22:43:14.126709] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 ***
00:16:13.366  malloc0
00:16:13.625    22:43:14 sma.sma_crypto -- sma/crypto.sh@190 -- # jq -r .handle
00:16:13.625    22:43:14 sma.sma_crypto -- sma/crypto.sh@190 -- # create_device
00:16:13.625    22:43:14 sma.sma_crypto -- sma/crypto.sh@77 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:13.625  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:13.625  I0000 00:00:1733866994.348937  170154 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:13.625  I0000 00:00:1733866994.350807  170154 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:13.625  I0000 00:00:1733866994.352167  170161 subchannel.cc:806] subchannel 0x5624acf33de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5624acdd3840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5624acf4dda0, grpc.internal.client_channel_call_destination=0x7fc1e4f13390, grpc.internal.event_engine=0x5624acdc0490, grpc.internal.security_connector=0x5624acee52b0, grpc.internal.subchannel_pool=0x5624acda2690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5624acabf9a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:14.351687059+01:00"}), backing off for 999 ms
00:16:13.625  [2024-12-10 22:43:14.373070] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:16:13.625   22:43:14 sma.sma_crypto -- sma/crypto.sh@190 -- # device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:16:13.625   22:43:14 sma.sma_crypto -- sma/crypto.sh@193 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:13.625   22:43:14 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:16:13.625   22:43:14 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:16:13.625   22:43:14 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:13.625    22:43:14 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:13.625    22:43:14 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=5a790f10-948b-4632-a666-83fd8fe920c9 cipher= key= key2= config
00:16:13.625    22:43:14 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:16:13.625     22:43:14 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:16:13.625      22:43:14 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:13.625      22:43:14 sma.sma_crypto -- sma/common.sh@20 -- # python
00:16:13.882    22:43:14 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "WnkPEJSLRjKmZoP9j+kgyQ==",
00:16:13.882  "nvmf": {
00:16:13.882    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:16:13.882    "discovery": {
00:16:13.882      "discovery_endpoints": [
00:16:13.882        {
00:16:13.882          "trtype": "tcp",
00:16:13.882          "traddr": "127.0.0.1",
00:16:13.882          "trsvcid": "8009"
00:16:13.882        }
00:16:13.882      ]
00:16:13.882    }
00:16:13.882  }'
00:16:13.882    22:43:14 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:16:13.882    22:43:14 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:16:13.882    22:43:14 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n '' ]]
00:16:13.882    22:43:14 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:16:13.882  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:13.882  I0000 00:00:1733866994.649150  170182 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:13.882  I0000 00:00:1733866994.650654  170182 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:13.882  I0000 00:00:1733866994.652068  170297 subchannel.cc:806] subchannel 0x56185e094de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56185df34840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56185e0aeda0, grpc.internal.client_channel_call_destination=0x7f54dabb9390, grpc.internal.event_engine=0x56185ddb3030, grpc.internal.security_connector=0x56185e0462b0, grpc.internal.subchannel_pool=0x56185df03690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56185dc209a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:14.65159008+01:00"}), backing off for 1000 ms
00:16:15.257  {}
00:16:15.257    22:43:15 sma.sma_crypto -- sma/crypto.sh@195 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:16:15.257    22:43:15 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:15.257    22:43:15 sma.sma_crypto -- sma/crypto.sh@195 -- # jq -r '.[0].namespaces[0].name'
00:16:15.257    22:43:15 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:15.257    22:43:15 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:15.257   22:43:15 sma.sma_crypto -- sma/crypto.sh@195 -- # ns_bdev=238d48ef-eec1-4a1e-938d-71b3421bb7150n1
00:16:15.257    22:43:15 sma.sma_crypto -- sma/crypto.sh@196 -- # rpc_cmd bdev_get_bdevs -b 238d48ef-eec1-4a1e-938d-71b3421bb7150n1
00:16:15.257    22:43:15 sma.sma_crypto -- sma/crypto.sh@196 -- # jq -r '.[0].product_name'
00:16:15.257    22:43:15 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:15.257    22:43:15 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:15.257    22:43:15 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:15.257   22:43:15 sma.sma_crypto -- sma/crypto.sh@196 -- # [[ NVMe disk == \N\V\M\e\ \d\i\s\k ]]
00:16:15.257    22:43:15 sma.sma_crypto -- sma/crypto.sh@197 -- # rpc_cmd bdev_get_bdevs
00:16:15.257    22:43:15 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:15.257    22:43:15 sma.sma_crypto -- sma/crypto.sh@197 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:16:15.257    22:43:15 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:15.257    22:43:15 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:15.257   22:43:15 sma.sma_crypto -- sma/crypto.sh@197 -- # [[ 0 -eq 0 ]]
00:16:15.257    22:43:15 sma.sma_crypto -- sma/crypto.sh@198 -- # jq -r '.[0].namespaces[0].uuid'
00:16:15.257    22:43:15 sma.sma_crypto -- sma/crypto.sh@198 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:16:15.257    22:43:15 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:15.257    22:43:15 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:15.257    22:43:15 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:15.257   22:43:15 sma.sma_crypto -- sma/crypto.sh@198 -- # [[ 5a790f10-948b-4632-a666-83fd8fe920c9 == \5\a\7\9\0\f\1\0\-\9\4\8\b\-\4\6\3\2\-\a\6\6\6\-\8\3\f\d\8\f\e\9\2\0\c\9 ]]
00:16:15.257    22:43:15 sma.sma_crypto -- sma/crypto.sh@199 -- # jq -r '.[0].namespaces[0].nguid'
00:16:15.257    22:43:15 sma.sma_crypto -- sma/crypto.sh@199 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:16:15.257    22:43:15 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:15.257    22:43:15 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:15.257    22:43:15 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:15.257    22:43:16 sma.sma_crypto -- sma/crypto.sh@199 -- # uuid2nguid 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:15.257    22:43:16 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=5A790F10-948B-4632-A666-83FD8FE920C9
00:16:15.257    22:43:16 sma.sma_crypto -- sma/common.sh@41 -- # echo 5A790F10948B4632A66683FD8FE920C9
00:16:15.257   22:43:16 sma.sma_crypto -- sma/crypto.sh@199 -- # [[ 5A790F10948B4632A66683FD8FE920C9 == \5\A\7\9\0\F\1\0\9\4\8\B\4\6\3\2\A\6\6\6\8\3\F\D\8\F\E\9\2\0\C\9 ]]
00:16:15.257   22:43:16 sma.sma_crypto -- sma/crypto.sh@201 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:15.257   22:43:16 sma.sma_crypto -- sma/crypto.sh@120 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:15.257    22:43:16 sma.sma_crypto -- sma/crypto.sh@120 -- # uuid2base64 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:15.257    22:43:16 sma.sma_crypto -- sma/common.sh@20 -- # python
00:16:15.516  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:15.516  I0000 00:00:1733866996.277536  170626 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:15.516  I0000 00:00:1733866996.279149  170626 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:15.516  I0000 00:00:1733866996.280483  170638 subchannel.cc:806] subchannel 0x560aa6423de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x560aa62c3840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x560aa643dda0, grpc.internal.client_channel_call_destination=0x7f31d517e390, grpc.internal.event_engine=0x560aa62b0490, grpc.internal.security_connector=0x560aa63d52b0, grpc.internal.subchannel_pool=0x560aa6292690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x560aa5faf9a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:16.2800258+01:00"}), backing off for 1000 ms
00:16:15.774  {}
00:16:15.774   22:43:16 sma.sma_crypto -- sma/crypto.sh@204 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9 AES_CBC 1234567890abcdef1234567890abcdef
00:16:15.774   22:43:16 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:16:15.774   22:43:16 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:16:15.774   22:43:16 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:15.774    22:43:16 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 5a790f10-948b-4632-a666-83fd8fe920c9 AES_CBC 1234567890abcdef1234567890abcdef
00:16:15.774    22:43:16 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=5a790f10-948b-4632-a666-83fd8fe920c9 cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2= config
00:16:15.774    22:43:16 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:16:15.774     22:43:16 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:16:15.774      22:43:16 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:15.774      22:43:16 sma.sma_crypto -- sma/common.sh@20 -- # python
00:16:15.774    22:43:16 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "WnkPEJSLRjKmZoP9j+kgyQ==",
00:16:15.774  "nvmf": {
00:16:15.774    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:16:15.774    "discovery": {
00:16:15.774      "discovery_endpoints": [
00:16:15.774        {
00:16:15.774          "trtype": "tcp",
00:16:15.774          "traddr": "127.0.0.1",
00:16:15.774          "trsvcid": "8009"
00:16:15.774        }
00:16:15.774      ]
00:16:15.774    }
00:16:15.774  }'
00:16:15.774    22:43:16 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:16:15.774    22:43:16 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:16:15.774    22:43:16 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:16:15.774    22:43:16 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:16:15.774     22:43:16 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:16:15.774     22:43:16 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:16:15.774     22:43:16 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:16:15.774    22:43:16 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:16:15.774     22:43:16 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:16:15.774     22:43:16 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:16:15.774      22:43:16 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:16:15.774    22:43:16 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:16:15.774     22:43:16 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:16:15.774    22:43:16 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:16:15.774    "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:16:15.774  }'
00:16:15.774    22:43:16 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:16:15.774    22:43:16 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:16:16.033  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:16.033  I0000 00:00:1733866996.655473  170658 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:16.033  I0000 00:00:1733866996.657154  170658 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:16.033  I0000 00:00:1733866996.658640  170675 subchannel.cc:806] subchannel 0x555cf2bf0de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x555cf2a90840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x555cf2c0ada0, grpc.internal.client_channel_call_destination=0x7fef74155390, grpc.internal.event_engine=0x555cf290f030, grpc.internal.security_connector=0x555cf2ba22b0, grpc.internal.subchannel_pool=0x555cf2a5f690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x555cf277c9a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:16.658202789+01:00"}), backing off for 1000 ms
00:16:17.408  {}
00:16:17.408    22:43:17 sma.sma_crypto -- sma/crypto.sh@206 -- # rpc_cmd bdev_nvme_get_discovery_info
00:16:17.408    22:43:17 sma.sma_crypto -- sma/crypto.sh@206 -- # jq -r '. | length'
00:16:17.408    22:43:17 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.408    22:43:17 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:17.408    22:43:17 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.408   22:43:17 sma.sma_crypto -- sma/crypto.sh@206 -- # [[ 1 -eq 1 ]]
00:16:17.408    22:43:17 sma.sma_crypto -- sma/crypto.sh@207 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:16:17.408    22:43:17 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.408    22:43:17 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:17.408    22:43:17 sma.sma_crypto -- sma/crypto.sh@207 -- # jq -r '.[0].namespaces | length'
00:16:17.408    22:43:17 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.408   22:43:17 sma.sma_crypto -- sma/crypto.sh@207 -- # [[ 1 -eq 1 ]]
00:16:17.408   22:43:17 sma.sma_crypto -- sma/crypto.sh@209 -- # verify_crypto_volume nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:17.408   22:43:17 sma.sma_crypto -- sma/crypto.sh@132 -- # local nqn=nqn.2016-06.io.spdk:cnode0 uuid=5a790f10-948b-4632-a666-83fd8fe920c9 ns ns_bdev
00:16:17.408    22:43:17 sma.sma_crypto -- sma/crypto.sh@134 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:16:17.408    22:43:17 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.408    22:43:17 sma.sma_crypto -- sma/crypto.sh@134 -- # jq -r '.[0].namespaces[0]'
00:16:17.408    22:43:17 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:17.408    22:43:17 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.408   22:43:17 sma.sma_crypto -- sma/crypto.sh@134 -- # ns='{
00:16:17.408    "nsid": 1,
00:16:17.408    "bdev_name": "4409cc76-030e-47f3-a08d-a83eeeb30519",
00:16:17.408    "name": "4409cc76-030e-47f3-a08d-a83eeeb30519",
00:16:17.408    "nguid": "5A790F10948B4632A66683FD8FE920C9",
00:16:17.409    "uuid": "5a790f10-948b-4632-a666-83fd8fe920c9"
00:16:17.409  }'
00:16:17.409    22:43:17 sma.sma_crypto -- sma/crypto.sh@135 -- # jq -r .name
00:16:17.409   22:43:18 sma.sma_crypto -- sma/crypto.sh@135 -- # ns_bdev=4409cc76-030e-47f3-a08d-a83eeeb30519
00:16:17.409    22:43:18 sma.sma_crypto -- sma/crypto.sh@138 -- # rpc_cmd bdev_get_bdevs -b 4409cc76-030e-47f3-a08d-a83eeeb30519
00:16:17.409    22:43:18 sma.sma_crypto -- sma/crypto.sh@138 -- # jq -r '.[0].product_name'
00:16:17.409    22:43:18 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.409    22:43:18 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:17.409    22:43:18 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.409   22:43:18 sma.sma_crypto -- sma/crypto.sh@138 -- # [[ crypto == crypto ]]
00:16:17.409    22:43:18 sma.sma_crypto -- sma/crypto.sh@139 -- # rpc_cmd bdev_get_bdevs
00:16:17.409    22:43:18 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.409    22:43:18 sma.sma_crypto -- sma/crypto.sh@139 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:16:17.409    22:43:18 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:17.409    22:43:18 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.409   22:43:18 sma.sma_crypto -- sma/crypto.sh@139 -- # [[ 1 -eq 1 ]]
00:16:17.409    22:43:18 sma.sma_crypto -- sma/crypto.sh@141 -- # jq -r .uuid
00:16:17.409   22:43:18 sma.sma_crypto -- sma/crypto.sh@141 -- # [[ 5a790f10-948b-4632-a666-83fd8fe920c9 == \5\a\7\9\0\f\1\0\-\9\4\8\b\-\4\6\3\2\-\a\6\6\6\-\8\3\f\d\8\f\e\9\2\0\c\9 ]]
00:16:17.409    22:43:18 sma.sma_crypto -- sma/crypto.sh@142 -- # jq -r .nguid
00:16:17.409    22:43:18 sma.sma_crypto -- sma/crypto.sh@142 -- # uuid2nguid 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:17.409    22:43:18 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=5A790F10-948B-4632-A666-83FD8FE920C9
00:16:17.409    22:43:18 sma.sma_crypto -- sma/common.sh@41 -- # echo 5A790F10948B4632A66683FD8FE920C9
00:16:17.409   22:43:18 sma.sma_crypto -- sma/crypto.sh@142 -- # [[ 5A790F10948B4632A66683FD8FE920C9 == \5\A\7\9\0\F\1\0\9\4\8\B\4\6\3\2\A\6\6\6\8\3\F\D\8\F\E\9\2\0\C\9 ]]
00:16:17.409    22:43:18 sma.sma_crypto -- sma/crypto.sh@211 -- # jq -r '.[] | select(.product_name == "crypto")'
00:16:17.409    22:43:18 sma.sma_crypto -- sma/crypto.sh@211 -- # rpc_cmd bdev_get_bdevs
00:16:17.409    22:43:18 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.409    22:43:18 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:17.409    22:43:18 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.668   22:43:18 sma.sma_crypto -- sma/crypto.sh@211 -- # crypto_bdev='{
00:16:17.668    "name": "4409cc76-030e-47f3-a08d-a83eeeb30519",
00:16:17.668    "aliases": [
00:16:17.668      "d918c12b-1dab-5d3d-8006-b350b4183ee9"
00:16:17.668    ],
00:16:17.668    "product_name": "crypto",
00:16:17.668    "block_size": 4096,
00:16:17.668    "num_blocks": 8192,
00:16:17.668    "uuid": "d918c12b-1dab-5d3d-8006-b350b4183ee9",
00:16:17.668    "assigned_rate_limits": {
00:16:17.668      "rw_ios_per_sec": 0,
00:16:17.668      "rw_mbytes_per_sec": 0,
00:16:17.668      "r_mbytes_per_sec": 0,
00:16:17.668      "w_mbytes_per_sec": 0
00:16:17.668    },
00:16:17.668    "claimed": true,
00:16:17.668    "claim_type": "exclusive_write",
00:16:17.668    "zoned": false,
00:16:17.668    "supported_io_types": {
00:16:17.668      "read": true,
00:16:17.668      "write": true,
00:16:17.668      "unmap": true,
00:16:17.668      "flush": true,
00:16:17.668      "reset": true,
00:16:17.668      "nvme_admin": false,
00:16:17.668      "nvme_io": false,
00:16:17.668      "nvme_io_md": false,
00:16:17.668      "write_zeroes": true,
00:16:17.668      "zcopy": false,
00:16:17.668      "get_zone_info": false,
00:16:17.668      "zone_management": false,
00:16:17.668      "zone_append": false,
00:16:17.668      "compare": false,
00:16:17.668      "compare_and_write": false,
00:16:17.668      "abort": false,
00:16:17.668      "seek_hole": false,
00:16:17.668      "seek_data": false,
00:16:17.668      "copy": false,
00:16:17.668      "nvme_iov_md": false
00:16:17.668    },
00:16:17.668    "memory_domains": [
00:16:17.668      {
00:16:17.668        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:17.668        "dma_device_type": 2
00:16:17.668      }
00:16:17.668    ],
00:16:17.668    "driver_specific": {
00:16:17.668      "crypto": {
00:16:17.668        "base_bdev_name": "cb819ec8-4d99-445c-bf0a-e08dfdfcf3470n1",
00:16:17.668        "name": "4409cc76-030e-47f3-a08d-a83eeeb30519",
00:16:17.668        "key_name": "4409cc76-030e-47f3-a08d-a83eeeb30519_AES_CBC"
00:16:17.668      }
00:16:17.668    }
00:16:17.668  }'
00:16:17.668    22:43:18 sma.sma_crypto -- sma/crypto.sh@212 -- # jq -r .driver_specific.crypto.key_name
00:16:17.668   22:43:18 sma.sma_crypto -- sma/crypto.sh@212 -- # key_name=4409cc76-030e-47f3-a08d-a83eeeb30519_AES_CBC
00:16:17.668    22:43:18 sma.sma_crypto -- sma/crypto.sh@213 -- # rpc_cmd accel_crypto_keys_get -k 4409cc76-030e-47f3-a08d-a83eeeb30519_AES_CBC
00:16:17.668    22:43:18 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.668    22:43:18 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:17.668    22:43:18 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.668   22:43:18 sma.sma_crypto -- sma/crypto.sh@213 -- # key_obj='[
00:16:17.668  {
00:16:17.668  "name": "4409cc76-030e-47f3-a08d-a83eeeb30519_AES_CBC",
00:16:17.668  "cipher": "AES_CBC",
00:16:17.668  "key": "1234567890abcdef1234567890abcdef"
00:16:17.668  }
00:16:17.668  ]'
00:16:17.668    22:43:18 sma.sma_crypto -- sma/crypto.sh@214 -- # jq -r '.[0].key'
00:16:17.668   22:43:18 sma.sma_crypto -- sma/crypto.sh@214 -- # [[ 1234567890abcdef1234567890abcdef == \1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f\1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f ]]
00:16:17.668    22:43:18 sma.sma_crypto -- sma/crypto.sh@215 -- # jq -r '.[0].cipher'
00:16:17.668   22:43:18 sma.sma_crypto -- sma/crypto.sh@215 -- # [[ AES_CBC == \A\E\S\_\C\B\C ]]
00:16:17.668   22:43:18 sma.sma_crypto -- sma/crypto.sh@218 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9 AES_CBC 1234567890abcdef1234567890abcdef
00:16:17.668   22:43:18 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:16:17.668   22:43:18 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:16:17.668   22:43:18 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:17.668    22:43:18 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 5a790f10-948b-4632-a666-83fd8fe920c9 AES_CBC 1234567890abcdef1234567890abcdef
00:16:17.668    22:43:18 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=5a790f10-948b-4632-a666-83fd8fe920c9 cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2= config
00:16:17.668    22:43:18 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:16:17.668     22:43:18 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:16:17.668      22:43:18 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:17.668      22:43:18 sma.sma_crypto -- sma/common.sh@20 -- # python
00:16:17.668    22:43:18 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "WnkPEJSLRjKmZoP9j+kgyQ==",
00:16:17.668  "nvmf": {
00:16:17.668    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:16:17.668    "discovery": {
00:16:17.668      "discovery_endpoints": [
00:16:17.668        {
00:16:17.668          "trtype": "tcp",
00:16:17.668          "traddr": "127.0.0.1",
00:16:17.668          "trsvcid": "8009"
00:16:17.668        }
00:16:17.668      ]
00:16:17.668    }
00:16:17.668  }'
00:16:17.668    22:43:18 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:16:17.668    22:43:18 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:16:17.668    22:43:18 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:16:17.668    22:43:18 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:16:17.668     22:43:18 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:16:17.668     22:43:18 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:16:17.668     22:43:18 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:16:17.668    22:43:18 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:16:17.668     22:43:18 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:16:17.668     22:43:18 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:16:17.668      22:43:18 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:16:17.668    22:43:18 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:16:17.668     22:43:18 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:16:17.668    22:43:18 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:16:17.668    "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:16:17.668  }'
00:16:17.668    22:43:18 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:16:17.668    22:43:18 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:16:17.927  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:17.927  I0000 00:00:1733866998.569879  171132 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:17.927  I0000 00:00:1733866998.571707  171132 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:17.927  I0000 00:00:1733866998.573120  171151 subchannel.cc:806] subchannel 0x55ed6be65de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55ed6bd05840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55ed6be7fda0, grpc.internal.client_channel_call_destination=0x7f38ac0bf390, grpc.internal.event_engine=0x55ed6bb84030, grpc.internal.security_connector=0x55ed6be172b0, grpc.internal.subchannel_pool=0x55ed6bcd4690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55ed6b9f19a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:18.572608459+01:00"}), backing off for 1000 ms
00:16:17.927  {}
00:16:17.927    22:43:18 sma.sma_crypto -- sma/crypto.sh@221 -- # jq -r '. | length'
00:16:17.927    22:43:18 sma.sma_crypto -- sma/crypto.sh@221 -- # rpc_cmd bdev_nvme_get_discovery_info
00:16:17.927    22:43:18 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.927    22:43:18 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:17.927    22:43:18 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.927   22:43:18 sma.sma_crypto -- sma/crypto.sh@221 -- # [[ 1 -eq 1 ]]
00:16:17.927    22:43:18 sma.sma_crypto -- sma/crypto.sh@222 -- # jq -r '.[0].namespaces | length'
00:16:17.927    22:43:18 sma.sma_crypto -- sma/crypto.sh@222 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:16:17.927    22:43:18 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.927    22:43:18 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:17.927    22:43:18 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.927   22:43:18 sma.sma_crypto -- sma/crypto.sh@222 -- # [[ 1 -eq 1 ]]
00:16:18.186   22:43:18 sma.sma_crypto -- sma/crypto.sh@223 -- # verify_crypto_volume nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:18.187   22:43:18 sma.sma_crypto -- sma/crypto.sh@132 -- # local nqn=nqn.2016-06.io.spdk:cnode0 uuid=5a790f10-948b-4632-a666-83fd8fe920c9 ns ns_bdev
00:16:18.187    22:43:18 sma.sma_crypto -- sma/crypto.sh@134 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:16:18.187    22:43:18 sma.sma_crypto -- sma/crypto.sh@134 -- # jq -r '.[0].namespaces[0]'
00:16:18.187    22:43:18 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.187    22:43:18 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:18.187    22:43:18 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.187   22:43:18 sma.sma_crypto -- sma/crypto.sh@134 -- # ns='{
00:16:18.187    "nsid": 1,
00:16:18.187    "bdev_name": "4409cc76-030e-47f3-a08d-a83eeeb30519",
00:16:18.187    "name": "4409cc76-030e-47f3-a08d-a83eeeb30519",
00:16:18.187    "nguid": "5A790F10948B4632A66683FD8FE920C9",
00:16:18.187    "uuid": "5a790f10-948b-4632-a666-83fd8fe920c9"
00:16:18.187  }'
00:16:18.187    22:43:18 sma.sma_crypto -- sma/crypto.sh@135 -- # jq -r .name
00:16:18.187   22:43:18 sma.sma_crypto -- sma/crypto.sh@135 -- # ns_bdev=4409cc76-030e-47f3-a08d-a83eeeb30519
00:16:18.187    22:43:18 sma.sma_crypto -- sma/crypto.sh@138 -- # rpc_cmd bdev_get_bdevs -b 4409cc76-030e-47f3-a08d-a83eeeb30519
00:16:18.187    22:43:18 sma.sma_crypto -- sma/crypto.sh@138 -- # jq -r '.[0].product_name'
00:16:18.187    22:43:18 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.187    22:43:18 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:18.187    22:43:18 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.187   22:43:18 sma.sma_crypto -- sma/crypto.sh@138 -- # [[ crypto == crypto ]]
00:16:18.187    22:43:18 sma.sma_crypto -- sma/crypto.sh@139 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:16:18.187    22:43:18 sma.sma_crypto -- sma/crypto.sh@139 -- # rpc_cmd bdev_get_bdevs
00:16:18.187    22:43:18 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.187    22:43:18 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:18.187    22:43:18 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.187   22:43:18 sma.sma_crypto -- sma/crypto.sh@139 -- # [[ 1 -eq 1 ]]
00:16:18.187    22:43:18 sma.sma_crypto -- sma/crypto.sh@141 -- # jq -r .uuid
00:16:18.187   22:43:18 sma.sma_crypto -- sma/crypto.sh@141 -- # [[ 5a790f10-948b-4632-a666-83fd8fe920c9 == \5\a\7\9\0\f\1\0\-\9\4\8\b\-\4\6\3\2\-\a\6\6\6\-\8\3\f\d\8\f\e\9\2\0\c\9 ]]
00:16:18.187    22:43:18 sma.sma_crypto -- sma/crypto.sh@142 -- # jq -r .nguid
00:16:18.187    22:43:18 sma.sma_crypto -- sma/crypto.sh@142 -- # uuid2nguid 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:18.187    22:43:18 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=5A790F10-948B-4632-A666-83FD8FE920C9
00:16:18.187    22:43:18 sma.sma_crypto -- sma/common.sh@41 -- # echo 5A790F10948B4632A66683FD8FE920C9
00:16:18.187   22:43:18 sma.sma_crypto -- sma/crypto.sh@142 -- # [[ 5A790F10948B4632A66683FD8FE920C9 == \5\A\7\9\0\F\1\0\9\4\8\B\4\6\3\2\A\6\6\6\8\3\F\D\8\F\E\9\2\0\C\9 ]]
00:16:18.187    22:43:18 sma.sma_crypto -- sma/crypto.sh@224 -- # rpc_cmd bdev_get_bdevs
00:16:18.187    22:43:18 sma.sma_crypto -- sma/crypto.sh@224 -- # jq -r '.[] | select(.product_name == "crypto")'
00:16:18.187    22:43:18 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.187    22:43:18 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:18.187    22:43:18 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.187   22:43:18 sma.sma_crypto -- sma/crypto.sh@224 -- # crypto_bdev2='{
00:16:18.187    "name": "4409cc76-030e-47f3-a08d-a83eeeb30519",
00:16:18.187    "aliases": [
00:16:18.187      "d918c12b-1dab-5d3d-8006-b350b4183ee9"
00:16:18.187    ],
00:16:18.187    "product_name": "crypto",
00:16:18.187    "block_size": 4096,
00:16:18.187    "num_blocks": 8192,
00:16:18.187    "uuid": "d918c12b-1dab-5d3d-8006-b350b4183ee9",
00:16:18.187    "assigned_rate_limits": {
00:16:18.187      "rw_ios_per_sec": 0,
00:16:18.187      "rw_mbytes_per_sec": 0,
00:16:18.187      "r_mbytes_per_sec": 0,
00:16:18.187      "w_mbytes_per_sec": 0
00:16:18.187    },
00:16:18.187    "claimed": true,
00:16:18.187    "claim_type": "exclusive_write",
00:16:18.187    "zoned": false,
00:16:18.187    "supported_io_types": {
00:16:18.187      "read": true,
00:16:18.187      "write": true,
00:16:18.187      "unmap": true,
00:16:18.187      "flush": true,
00:16:18.187      "reset": true,
00:16:18.187      "nvme_admin": false,
00:16:18.187      "nvme_io": false,
00:16:18.187      "nvme_io_md": false,
00:16:18.187      "write_zeroes": true,
00:16:18.187      "zcopy": false,
00:16:18.187      "get_zone_info": false,
00:16:18.187      "zone_management": false,
00:16:18.187      "zone_append": false,
00:16:18.187      "compare": false,
00:16:18.187      "compare_and_write": false,
00:16:18.187      "abort": false,
00:16:18.187      "seek_hole": false,
00:16:18.187      "seek_data": false,
00:16:18.187      "copy": false,
00:16:18.187      "nvme_iov_md": false
00:16:18.187    },
00:16:18.187    "memory_domains": [
00:16:18.187      {
00:16:18.187        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:18.187        "dma_device_type": 2
00:16:18.187      }
00:16:18.187    ],
00:16:18.187    "driver_specific": {
00:16:18.187      "crypto": {
00:16:18.187        "base_bdev_name": "cb819ec8-4d99-445c-bf0a-e08dfdfcf3470n1",
00:16:18.187        "name": "4409cc76-030e-47f3-a08d-a83eeeb30519",
00:16:18.187        "key_name": "4409cc76-030e-47f3-a08d-a83eeeb30519_AES_CBC"
00:16:18.187      }
00:16:18.187    }
00:16:18.187  }'
00:16:18.446    22:43:18 sma.sma_crypto -- sma/crypto.sh@225 -- # jq -r .name
00:16:18.446    22:43:19 sma.sma_crypto -- sma/crypto.sh@225 -- # jq -r .name
00:16:18.446   22:43:19 sma.sma_crypto -- sma/crypto.sh@225 -- # [[ 4409cc76-030e-47f3-a08d-a83eeeb30519 == 4409cc76-030e-47f3-a08d-a83eeeb30519 ]]
00:16:18.446    22:43:19 sma.sma_crypto -- sma/crypto.sh@226 -- # jq -r .driver_specific.crypto.key_name
00:16:18.446   22:43:19 sma.sma_crypto -- sma/crypto.sh@226 -- # key_name=4409cc76-030e-47f3-a08d-a83eeeb30519_AES_CBC
00:16:18.446    22:43:19 sma.sma_crypto -- sma/crypto.sh@227 -- # rpc_cmd accel_crypto_keys_get -k 4409cc76-030e-47f3-a08d-a83eeeb30519_AES_CBC
00:16:18.446    22:43:19 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.446    22:43:19 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:18.446    22:43:19 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.446   22:43:19 sma.sma_crypto -- sma/crypto.sh@227 -- # key_obj='[
00:16:18.446  {
00:16:18.446  "name": "4409cc76-030e-47f3-a08d-a83eeeb30519_AES_CBC",
00:16:18.446  "cipher": "AES_CBC",
00:16:18.446  "key": "1234567890abcdef1234567890abcdef"
00:16:18.446  }
00:16:18.446  ]'
00:16:18.446    22:43:19 sma.sma_crypto -- sma/crypto.sh@228 -- # jq -r '.[0].key'
00:16:18.446   22:43:19 sma.sma_crypto -- sma/crypto.sh@228 -- # [[ 1234567890abcdef1234567890abcdef == \1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f\1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f ]]
00:16:18.446    22:43:19 sma.sma_crypto -- sma/crypto.sh@229 -- # jq -r '.[0].cipher'
00:16:18.446   22:43:19 sma.sma_crypto -- sma/crypto.sh@229 -- # [[ AES_CBC == \A\E\S\_\C\B\C ]]
00:16:18.446   22:43:19 sma.sma_crypto -- sma/crypto.sh@232 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9 AES_XTS 1234567890abcdef1234567890abcdef
00:16:18.446   22:43:19 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:16:18.446   22:43:19 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9 AES_XTS 1234567890abcdef1234567890abcdef
00:16:18.446   22:43:19 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:16:18.446   22:43:19 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:18.446    22:43:19 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:16:18.446   22:43:19 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:18.446   22:43:19 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9 AES_XTS 1234567890abcdef1234567890abcdef
00:16:18.446   22:43:19 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:16:18.446   22:43:19 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:16:18.446   22:43:19 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:18.446    22:43:19 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 5a790f10-948b-4632-a666-83fd8fe920c9 AES_XTS 1234567890abcdef1234567890abcdef
00:16:18.446    22:43:19 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=5a790f10-948b-4632-a666-83fd8fe920c9 cipher=AES_XTS key=1234567890abcdef1234567890abcdef key2= config
00:16:18.446    22:43:19 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:16:18.446     22:43:19 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:16:18.446      22:43:19 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:18.446      22:43:19 sma.sma_crypto -- sma/common.sh@20 -- # python
00:16:18.446    22:43:19 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "WnkPEJSLRjKmZoP9j+kgyQ==",
00:16:18.446  "nvmf": {
00:16:18.446    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:16:18.446    "discovery": {
00:16:18.446      "discovery_endpoints": [
00:16:18.446        {
00:16:18.446          "trtype": "tcp",
00:16:18.446          "traddr": "127.0.0.1",
00:16:18.446          "trsvcid": "8009"
00:16:18.446        }
00:16:18.446      ]
00:16:18.446    }
00:16:18.446  }'
00:16:18.446    22:43:19 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:16:18.446    22:43:19 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:16:18.446    22:43:19 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_XTS ]]
00:16:18.446    22:43:19 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:16:18.446     22:43:19 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_XTS
00:16:18.446     22:43:19 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:16:18.446     22:43:19 sma.sma_crypto -- sma/common.sh@29 -- # echo 1
00:16:18.446    22:43:19 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:16:18.446     22:43:19 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:16:18.446     22:43:19 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:16:18.446      22:43:19 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:16:18.446    22:43:19 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:16:18.446     22:43:19 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:16:18.446    22:43:19 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:16:18.446    "cipher": 1,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:16:18.446  }'
00:16:18.446    22:43:19 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:16:18.446    22:43:19 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:16:18.705  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:18.705  I0000 00:00:1733866999.467510  171209 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:18.705  I0000 00:00:1733866999.469129  171209 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:18.705  I0000 00:00:1733866999.470605  171423 subchannel.cc:806] subchannel 0x56075e692de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56075e532840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56075e6acda0, grpc.internal.client_channel_call_destination=0x7f10124ed390, grpc.internal.event_engine=0x56075e3b1030, grpc.internal.security_connector=0x56075e6442b0, grpc.internal.subchannel_pool=0x56075e501690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56075e21e9a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:19.470182015+01:00"}), backing off for 1000 ms
00:16:18.705  Traceback (most recent call last):
00:16:18.705    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:16:18.705      main(sys.argv[1:])
00:16:18.705    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:16:18.705      result = client.call(request['method'], request.get('params', {}))
00:16:18.705               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:18.705    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:16:18.705      response = func(request=json_format.ParseDict(params, input()))
00:16:18.705                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:18.705    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:16:18.705      return _end_unary_response_blocking(state, call, False, None)
00:16:18.705             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:18.705    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:16:18.705      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:16:18.705      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:18.705  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:18.705  	status = StatusCode.INVALID_ARGUMENT
00:16:18.705  	details = "Invalid volume crypto configuration: bad cipher"
00:16:18.705  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid volume crypto configuration: bad cipher", grpc_status:3, created_time:"2024-12-10T22:43:19.485635104+01:00"}"
00:16:18.705  >
00:16:18.964   22:43:19 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:16:18.964   22:43:19 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:18.964   22:43:19 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:18.964   22:43:19 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:18.964   22:43:19 sma.sma_crypto -- sma/crypto.sh@234 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9 AES_CBC deadbeefcafebabefeedbeefbabecafe
00:16:18.964   22:43:19 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:16:18.964   22:43:19 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9 AES_CBC deadbeefcafebabefeedbeefbabecafe
00:16:18.964   22:43:19 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:16:18.964   22:43:19 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:18.964    22:43:19 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:16:18.964   22:43:19 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:18.964   22:43:19 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9 AES_CBC deadbeefcafebabefeedbeefbabecafe
00:16:18.964   22:43:19 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:16:18.964   22:43:19 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:16:18.964   22:43:19 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:18.964    22:43:19 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 5a790f10-948b-4632-a666-83fd8fe920c9 AES_CBC deadbeefcafebabefeedbeefbabecafe
00:16:18.964    22:43:19 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=5a790f10-948b-4632-a666-83fd8fe920c9 cipher=AES_CBC key=deadbeefcafebabefeedbeefbabecafe key2= config
00:16:18.964    22:43:19 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:16:18.964     22:43:19 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:16:18.964      22:43:19 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:18.964      22:43:19 sma.sma_crypto -- sma/common.sh@20 -- # python
00:16:18.964    22:43:19 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "WnkPEJSLRjKmZoP9j+kgyQ==",
00:16:18.964  "nvmf": {
00:16:18.964    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:16:18.964    "discovery": {
00:16:18.964      "discovery_endpoints": [
00:16:18.964        {
00:16:18.964          "trtype": "tcp",
00:16:18.964          "traddr": "127.0.0.1",
00:16:18.964          "trsvcid": "8009"
00:16:18.964        }
00:16:18.964      ]
00:16:18.964    }
00:16:18.964  }'
00:16:18.964    22:43:19 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:16:18.964    22:43:19 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:16:18.964    22:43:19 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:16:18.964    22:43:19 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:16:18.964     22:43:19 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:16:18.964     22:43:19 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:16:18.964     22:43:19 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:16:18.964    22:43:19 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:16:18.964     22:43:19 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key deadbeefcafebabefeedbeefbabecafe
00:16:18.964     22:43:19 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:16:18.964      22:43:19 sma.sma_crypto -- sma/common.sh@35 -- # echo -n deadbeefcafebabefeedbeefbabecafe
00:16:18.964    22:43:19 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:16:18.964     22:43:19 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:16:18.964    22:43:19 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:16:18.964    "cipher": 0,"key": "ZGVhZGJlZWZjYWZlYmFiZWZlZWRiZWVmYmFiZWNhZmU="
00:16:18.964  }'
00:16:18.964    22:43:19 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:16:18.964    22:43:19 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:16:19.224  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:19.224  I0000 00:00:1733866999.772229  171444 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:19.224  I0000 00:00:1733866999.773942  171444 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:19.224  I0000 00:00:1733866999.775359  171457 subchannel.cc:806] subchannel 0x55d38f42dde0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55d38f2cd840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55d38f447da0, grpc.internal.client_channel_call_destination=0x7fe8c0d83390, grpc.internal.event_engine=0x55d38f14c030, grpc.internal.security_connector=0x55d38f3df2b0, grpc.internal.subchannel_pool=0x55d38f29c690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55d38efb99a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:19.774867517+01:00"}), backing off for 999 ms
00:16:19.224  Traceback (most recent call last):
00:16:19.224    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:16:19.224      main(sys.argv[1:])
00:16:19.224    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:16:19.224      result = client.call(request['method'], request.get('params', {}))
00:16:19.224               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:19.224    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:16:19.224      response = func(request=json_format.ParseDict(params, input()))
00:16:19.224                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:19.224    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:16:19.224      return _end_unary_response_blocking(state, call, False, None)
00:16:19.224             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:19.224    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:16:19.224      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:16:19.224      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:19.224  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:19.224  	status = StatusCode.INVALID_ARGUMENT
00:16:19.224  	details = "Invalid volume crypto configuration: bad key"
00:16:19.224  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-12-10T22:43:19.790635915+01:00", grpc_status:3, grpc_message:"Invalid volume crypto configuration: bad key"}"
00:16:19.224  >
00:16:19.224   22:43:19 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:16:19.224   22:43:19 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:19.224   22:43:19 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:19.224   22:43:19 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:19.224   22:43:19 sma.sma_crypto -- sma/crypto.sh@236 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9 AES_CBC 1234567890abcdef1234567890abcdef deadbeefcafebabefeedbeefbabecafe
00:16:19.224   22:43:19 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:16:19.224   22:43:19 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9 AES_CBC 1234567890abcdef1234567890abcdef deadbeefcafebabefeedbeefbabecafe
00:16:19.224   22:43:19 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:16:19.224   22:43:19 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:19.224    22:43:19 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:16:19.224   22:43:19 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:19.224   22:43:19 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9 AES_CBC 1234567890abcdef1234567890abcdef deadbeefcafebabefeedbeefbabecafe
00:16:19.224   22:43:19 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:16:19.224   22:43:19 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:16:19.224   22:43:19 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:19.224    22:43:19 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 5a790f10-948b-4632-a666-83fd8fe920c9 AES_CBC 1234567890abcdef1234567890abcdef deadbeefcafebabefeedbeefbabecafe
00:16:19.224    22:43:19 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=5a790f10-948b-4632-a666-83fd8fe920c9 cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2=deadbeefcafebabefeedbeefbabecafe config
00:16:19.224    22:43:19 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:16:19.224     22:43:19 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:16:19.224      22:43:19 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:19.224      22:43:19 sma.sma_crypto -- sma/common.sh@20 -- # python
00:16:19.224    22:43:19 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "WnkPEJSLRjKmZoP9j+kgyQ==",
00:16:19.224  "nvmf": {
00:16:19.224    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:16:19.224    "discovery": {
00:16:19.224      "discovery_endpoints": [
00:16:19.224        {
00:16:19.224          "trtype": "tcp",
00:16:19.224          "traddr": "127.0.0.1",
00:16:19.224          "trsvcid": "8009"
00:16:19.224        }
00:16:19.224      ]
00:16:19.224    }
00:16:19.224  }'
00:16:19.224    22:43:19 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:16:19.224    22:43:19 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:16:19.224    22:43:19 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:16:19.224    22:43:19 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:16:19.224     22:43:19 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:16:19.224     22:43:19 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:16:19.224     22:43:19 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:16:19.224    22:43:19 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:16:19.224     22:43:19 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:16:19.224     22:43:19 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:16:19.224      22:43:19 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:16:19.224    22:43:19 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n deadbeefcafebabefeedbeefbabecafe ]]
00:16:19.224    22:43:19 sma.sma_crypto -- sma/crypto.sh@55 -- # crypto+=("\"key2\": \"$(format_key $key2)\"")
00:16:19.224     22:43:19 sma.sma_crypto -- sma/crypto.sh@55 -- # format_key deadbeefcafebabefeedbeefbabecafe
00:16:19.224     22:43:19 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:16:19.224      22:43:19 sma.sma_crypto -- sma/common.sh@35 -- # echo -n deadbeefcafebabefeedbeefbabecafe
00:16:19.224     22:43:19 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:16:19.224    22:43:19 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:16:19.224    "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY=","key2": "ZGVhZGJlZWZjYWZlYmFiZWZlZWRiZWVmYmFiZWNhZmU="
00:16:19.224  }'
00:16:19.224    22:43:19 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:16:19.224    22:43:19 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:16:19.484  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:19.484  I0000 00:00:1733867000.104263  171478 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:19.484  I0000 00:00:1733867000.106143  171478 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:19.484  I0000 00:00:1733867000.107648  171494 subchannel.cc:806] subchannel 0x55812f520de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55812f3c0840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55812f53ada0, grpc.internal.client_channel_call_destination=0x7fd6e4b22390, grpc.internal.event_engine=0x55812f434e50, grpc.internal.security_connector=0x55812f434de0, grpc.internal.subchannel_pool=0x55812f4d1ef0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55812f52ab40, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:20.107149264+01:00"}), backing off for 1000 ms
00:16:19.484  Traceback (most recent call last):
00:16:19.484    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:16:19.484      main(sys.argv[1:])
00:16:19.484    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:16:19.484      result = client.call(request['method'], request.get('params', {}))
00:16:19.484               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:19.484    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:16:19.484      response = func(request=json_format.ParseDict(params, input()))
00:16:19.484                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:19.484    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:16:19.484      return _end_unary_response_blocking(state, call, False, None)
00:16:19.484             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:19.484    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:16:19.484      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:16:19.484      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:19.484  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:19.484  	status = StatusCode.INVALID_ARGUMENT
00:16:19.484  	details = "Invalid volume crypto configuration: bad key2"
00:16:19.484  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-12-10T22:43:20.123702837+01:00", grpc_status:3, grpc_message:"Invalid volume crypto configuration: bad key2"}"
00:16:19.484  >
00:16:19.484   22:43:20 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:16:19.484   22:43:20 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:19.484   22:43:20 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:19.484   22:43:20 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:19.484   22:43:20 sma.sma_crypto -- sma/crypto.sh@238 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9 8 1234567890abcdef1234567890abcdef
00:16:19.484   22:43:20 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:16:19.484   22:43:20 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9 8 1234567890abcdef1234567890abcdef
00:16:19.484   22:43:20 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:16:19.484   22:43:20 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:19.484    22:43:20 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:16:19.484   22:43:20 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:19.484   22:43:20 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9 8 1234567890abcdef1234567890abcdef
00:16:19.484   22:43:20 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:16:19.484   22:43:20 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:16:19.484   22:43:20 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:19.484    22:43:20 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 5a790f10-948b-4632-a666-83fd8fe920c9 8 1234567890abcdef1234567890abcdef
00:16:19.484    22:43:20 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=5a790f10-948b-4632-a666-83fd8fe920c9 cipher=8 key=1234567890abcdef1234567890abcdef key2= config
00:16:19.484    22:43:20 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:16:19.484     22:43:20 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:16:19.484      22:43:20 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:19.484      22:43:20 sma.sma_crypto -- sma/common.sh@20 -- # python
00:16:19.484    22:43:20 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "WnkPEJSLRjKmZoP9j+kgyQ==",
00:16:19.484  "nvmf": {
00:16:19.484    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:16:19.484    "discovery": {
00:16:19.484      "discovery_endpoints": [
00:16:19.484        {
00:16:19.484          "trtype": "tcp",
00:16:19.484          "traddr": "127.0.0.1",
00:16:19.484          "trsvcid": "8009"
00:16:19.484        }
00:16:19.484      ]
00:16:19.484    }
00:16:19.484  }'
00:16:19.484    22:43:20 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:16:19.484    22:43:20 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:16:19.484    22:43:20 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n 8 ]]
00:16:19.484    22:43:20 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:16:19.484     22:43:20 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher 8
00:16:19.484     22:43:20 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:16:19.484     22:43:20 sma.sma_crypto -- sma/common.sh@30 -- # echo 8
00:16:19.484    22:43:20 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:16:19.484     22:43:20 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:16:19.484     22:43:20 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:16:19.484      22:43:20 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:16:19.484    22:43:20 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:16:19.484     22:43:20 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:16:19.484    22:43:20 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:16:19.484    "cipher": 8,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:16:19.484  }'
00:16:19.484    22:43:20 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:16:19.484    22:43:20 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:16:19.743  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:19.743  I0000 00:00:1733867000.405529  171515 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:19.743  I0000 00:00:1733867000.407303  171515 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:19.743  I0000 00:00:1733867000.408723  171538 subchannel.cc:806] subchannel 0x5637d6281de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5637d6121840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5637d629bda0, grpc.internal.client_channel_call_destination=0x7faa4d9d8390, grpc.internal.event_engine=0x5637d5fa0030, grpc.internal.security_connector=0x5637d62332b0, grpc.internal.subchannel_pool=0x5637d60f0690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5637d5e0d9a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:20.408318582+01:00"}), backing off for 1000 ms
00:16:19.743  Traceback (most recent call last):
00:16:19.743    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:16:19.743      main(sys.argv[1:])
00:16:19.743    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:16:19.743      result = client.call(request['method'], request.get('params', {}))
00:16:19.743               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:19.743    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:16:19.743      response = func(request=json_format.ParseDict(params, input()))
00:16:19.743                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:19.743    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:16:19.743      return _end_unary_response_blocking(state, call, False, None)
00:16:19.743             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:19.743    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:16:19.743      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:16:19.743      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:19.743  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:19.743  	status = StatusCode.INVALID_ARGUMENT
00:16:19.743  	details = "Invalid volume crypto configuration: bad cipher"
00:16:19.743  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid volume crypto configuration: bad cipher", grpc_status:3, created_time:"2024-12-10T22:43:20.424716091+01:00"}"
00:16:19.743  >
00:16:19.743   22:43:20 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:16:19.744   22:43:20 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:19.744   22:43:20 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:19.744   22:43:20 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:19.744   22:43:20 sma.sma_crypto -- sma/crypto.sh@241 -- # verify_crypto_volume nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:19.744   22:43:20 sma.sma_crypto -- sma/crypto.sh@132 -- # local nqn=nqn.2016-06.io.spdk:cnode0 uuid=5a790f10-948b-4632-a666-83fd8fe920c9 ns ns_bdev
00:16:19.744    22:43:20 sma.sma_crypto -- sma/crypto.sh@134 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:16:19.744    22:43:20 sma.sma_crypto -- sma/crypto.sh@134 -- # jq -r '.[0].namespaces[0]'
00:16:19.744    22:43:20 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:19.744    22:43:20 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:19.744    22:43:20 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:19.744   22:43:20 sma.sma_crypto -- sma/crypto.sh@134 -- # ns='{
00:16:19.744    "nsid": 1,
00:16:19.744    "bdev_name": "4409cc76-030e-47f3-a08d-a83eeeb30519",
00:16:19.744    "name": "4409cc76-030e-47f3-a08d-a83eeeb30519",
00:16:19.744    "nguid": "5A790F10948B4632A66683FD8FE920C9",
00:16:19.744    "uuid": "5a790f10-948b-4632-a666-83fd8fe920c9"
00:16:19.744  }'
00:16:19.744    22:43:20 sma.sma_crypto -- sma/crypto.sh@135 -- # jq -r .name
00:16:20.003   22:43:20 sma.sma_crypto -- sma/crypto.sh@135 -- # ns_bdev=4409cc76-030e-47f3-a08d-a83eeeb30519
00:16:20.003    22:43:20 sma.sma_crypto -- sma/crypto.sh@138 -- # rpc_cmd bdev_get_bdevs -b 4409cc76-030e-47f3-a08d-a83eeeb30519
00:16:20.003    22:43:20 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.003    22:43:20 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:20.003    22:43:20 sma.sma_crypto -- sma/crypto.sh@138 -- # jq -r '.[0].product_name'
00:16:20.003    22:43:20 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.003   22:43:20 sma.sma_crypto -- sma/crypto.sh@138 -- # [[ crypto == crypto ]]
00:16:20.003    22:43:20 sma.sma_crypto -- sma/crypto.sh@139 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:16:20.003    22:43:20 sma.sma_crypto -- sma/crypto.sh@139 -- # rpc_cmd bdev_get_bdevs
00:16:20.003    22:43:20 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.003    22:43:20 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:20.003    22:43:20 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.003   22:43:20 sma.sma_crypto -- sma/crypto.sh@139 -- # [[ 1 -eq 1 ]]
00:16:20.003    22:43:20 sma.sma_crypto -- sma/crypto.sh@141 -- # jq -r .uuid
00:16:20.003   22:43:20 sma.sma_crypto -- sma/crypto.sh@141 -- # [[ 5a790f10-948b-4632-a666-83fd8fe920c9 == \5\a\7\9\0\f\1\0\-\9\4\8\b\-\4\6\3\2\-\a\6\6\6\-\8\3\f\d\8\f\e\9\2\0\c\9 ]]
00:16:20.003    22:43:20 sma.sma_crypto -- sma/crypto.sh@142 -- # jq -r .nguid
00:16:20.003    22:43:20 sma.sma_crypto -- sma/crypto.sh@142 -- # uuid2nguid 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:20.003    22:43:20 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=5A790F10-948B-4632-A666-83FD8FE920C9
00:16:20.003    22:43:20 sma.sma_crypto -- sma/common.sh@41 -- # echo 5A790F10948B4632A66683FD8FE920C9
00:16:20.003   22:43:20 sma.sma_crypto -- sma/crypto.sh@142 -- # [[ 5A790F10948B4632A66683FD8FE920C9 == \5\A\7\9\0\F\1\0\9\4\8\B\4\6\3\2\A\6\6\6\8\3\F\D\8\F\E\9\2\0\C\9 ]]
00:16:20.003   22:43:20 sma.sma_crypto -- sma/crypto.sh@243 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:20.003   22:43:20 sma.sma_crypto -- sma/crypto.sh@120 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:20.003    22:43:20 sma.sma_crypto -- sma/crypto.sh@120 -- # uuid2base64 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:20.003    22:43:20 sma.sma_crypto -- sma/common.sh@20 -- # python
00:16:20.263  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:20.263  I0000 00:00:1733867000.958821  171769 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:20.263  I0000 00:00:1733867000.960494  171769 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:20.263  I0000 00:00:1733867000.961756  171773 subchannel.cc:806] subchannel 0x5651bf9c4de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5651bf864840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5651bf9deda0, grpc.internal.client_channel_call_destination=0x7fb76b630390, grpc.internal.event_engine=0x5651bf851490, grpc.internal.security_connector=0x5651bf9762b0, grpc.internal.subchannel_pool=0x5651bf833690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5651bf5509a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:20.961290474+01:00"}), backing off for 1000 ms
00:16:20.263  {}
00:16:20.523   22:43:21 sma.sma_crypto -- sma/crypto.sh@247 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9 8 1234567890abcdef1234567890abcdef
00:16:20.523   22:43:21 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:16:20.523   22:43:21 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9 8 1234567890abcdef1234567890abcdef
00:16:20.523   22:43:21 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:16:20.523   22:43:21 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:20.523    22:43:21 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:16:20.523   22:43:21 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:20.523   22:43:21 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9 8 1234567890abcdef1234567890abcdef
00:16:20.523   22:43:21 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:16:20.523   22:43:21 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:16:20.523   22:43:21 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:20.523    22:43:21 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 5a790f10-948b-4632-a666-83fd8fe920c9 8 1234567890abcdef1234567890abcdef
00:16:20.523    22:43:21 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=5a790f10-948b-4632-a666-83fd8fe920c9 cipher=8 key=1234567890abcdef1234567890abcdef key2= config
00:16:20.523    22:43:21 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:16:20.523     22:43:21 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:16:20.523      22:43:21 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:20.523      22:43:21 sma.sma_crypto -- sma/common.sh@20 -- # python
00:16:20.523    22:43:21 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "WnkPEJSLRjKmZoP9j+kgyQ==",
00:16:20.523  "nvmf": {
00:16:20.523    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:16:20.523    "discovery": {
00:16:20.523      "discovery_endpoints": [
00:16:20.523        {
00:16:20.523          "trtype": "tcp",
00:16:20.523          "traddr": "127.0.0.1",
00:16:20.523          "trsvcid": "8009"
00:16:20.523        }
00:16:20.523      ]
00:16:20.523    }
00:16:20.523  }'
00:16:20.523    22:43:21 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:16:20.523    22:43:21 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:16:20.523    22:43:21 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n 8 ]]
00:16:20.523    22:43:21 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:16:20.523     22:43:21 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher 8
00:16:20.523     22:43:21 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:16:20.523     22:43:21 sma.sma_crypto -- sma/common.sh@30 -- # echo 8
00:16:20.523    22:43:21 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:16:20.523     22:43:21 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:16:20.523     22:43:21 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:16:20.523      22:43:21 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:16:20.523    22:43:21 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:16:20.523     22:43:21 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:16:20.523    22:43:21 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:16:20.523    "cipher": 8,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:16:20.523  }'
00:16:20.523    22:43:21 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:16:20.523    22:43:21 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:16:20.523  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:20.523  I0000 00:00:1733867001.302866  171794 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:20.523  I0000 00:00:1733867001.304595  171794 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:20.782  I0000 00:00:1733867001.306040  171809 subchannel.cc:806] subchannel 0x5603930edde0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x560392f8d840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x560393107da0, grpc.internal.client_channel_call_destination=0x7f193755d390, grpc.internal.event_engine=0x560392e0c030, grpc.internal.security_connector=0x56039309f2b0, grpc.internal.subchannel_pool=0x560392f5c690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x560392c799a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:21.305528263+01:00"}), backing off for 1000 ms
00:16:21.718  Traceback (most recent call last):
00:16:21.718    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:16:21.718      main(sys.argv[1:])
00:16:21.718    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:16:21.718      result = client.call(request['method'], request.get('params', {}))
00:16:21.718               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:21.718    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:16:21.718      response = func(request=json_format.ParseDict(params, input()))
00:16:21.718                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:21.718    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:16:21.718      return _end_unary_response_blocking(state, call, False, None)
00:16:21.718             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:21.718    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:16:21.718      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:16:21.718      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:21.718  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:21.718  	status = StatusCode.INVALID_ARGUMENT
00:16:21.718  	details = "Invalid volume crypto configuration: bad cipher"
00:16:21.718  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid volume crypto configuration: bad cipher", grpc_status:3, created_time:"2024-12-10T22:43:22.429092223+01:00"}"
00:16:21.718  >
00:16:21.718   22:43:22 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:16:21.718   22:43:22 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:21.718   22:43:22 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:21.718   22:43:22 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:21.718    22:43:22 sma.sma_crypto -- sma/crypto.sh@248 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:16:21.718    22:43:22 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:21.718    22:43:22 sma.sma_crypto -- sma/crypto.sh@248 -- # jq -r '.[0].namespaces | length'
00:16:21.718    22:43:22 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:21.718    22:43:22 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:21.977   22:43:22 sma.sma_crypto -- sma/crypto.sh@248 -- # [[ 0 -eq 0 ]]
00:16:21.977    22:43:22 sma.sma_crypto -- sma/crypto.sh@249 -- # rpc_cmd bdev_nvme_get_discovery_info
00:16:21.977    22:43:22 sma.sma_crypto -- sma/crypto.sh@249 -- # jq -r '. | length'
00:16:21.977    22:43:22 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:21.977    22:43:22 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:21.977    22:43:22 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:21.977   22:43:22 sma.sma_crypto -- sma/crypto.sh@249 -- # [[ 0 -eq 0 ]]
00:16:21.977    22:43:22 sma.sma_crypto -- sma/crypto.sh@250 -- # jq -r length
00:16:21.977    22:43:22 sma.sma_crypto -- sma/crypto.sh@250 -- # rpc_cmd bdev_get_bdevs
00:16:21.977    22:43:22 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:21.977    22:43:22 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:21.977    22:43:22 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:21.977   22:43:22 sma.sma_crypto -- sma/crypto.sh@250 -- # [[ 0 -eq 0 ]]
00:16:21.977   22:43:22 sma.sma_crypto -- sma/crypto.sh@252 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:16:21.977   22:43:22 sma.sma_crypto -- sma/crypto.sh@94 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:22.236  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:22.236  I0000 00:00:1733867002.819328  172052 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:22.236  I0000 00:00:1733867002.821175  172052 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:22.237  I0000 00:00:1733867002.822486  172055 subchannel.cc:806] subchannel 0x563f59ccade0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x563f59b6a840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x563f59ce4da0, grpc.internal.client_channel_call_destination=0x7f5514826390, grpc.internal.event_engine=0x563f599e9030, grpc.internal.security_connector=0x563f59b72770, grpc.internal.subchannel_pool=0x563f59b39690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x563f598569a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:22.822019124+01:00"}), backing off for 1000 ms
00:16:22.237  {}
00:16:22.237    22:43:22 sma.sma_crypto -- sma/crypto.sh@255 -- # create_device 5a790f10-948b-4632-a666-83fd8fe920c9 AES_CBC 1234567890abcdef1234567890abcdef
00:16:22.237    22:43:22 sma.sma_crypto -- sma/crypto.sh@255 -- # jq -r .handle
00:16:22.237    22:43:22 sma.sma_crypto -- sma/crypto.sh@77 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:22.237     22:43:22 sma.sma_crypto -- sma/crypto.sh@77 -- # gen_volume_params 5a790f10-948b-4632-a666-83fd8fe920c9 AES_CBC 1234567890abcdef1234567890abcdef
00:16:22.237     22:43:22 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=5a790f10-948b-4632-a666-83fd8fe920c9 cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2= config
00:16:22.237     22:43:22 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:16:22.237      22:43:22 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:16:22.237       22:43:22 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:22.237       22:43:22 sma.sma_crypto -- sma/common.sh@20 -- # python
00:16:22.237     22:43:22 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "WnkPEJSLRjKmZoP9j+kgyQ==",
00:16:22.237  "nvmf": {
00:16:22.237    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:16:22.237    "discovery": {
00:16:22.237      "discovery_endpoints": [
00:16:22.237        {
00:16:22.237          "trtype": "tcp",
00:16:22.237          "traddr": "127.0.0.1",
00:16:22.237          "trsvcid": "8009"
00:16:22.237        }
00:16:22.237      ]
00:16:22.237    }
00:16:22.237  }'
00:16:22.237     22:43:22 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:16:22.237     22:43:22 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:16:22.237     22:43:22 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:16:22.237     22:43:22 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:16:22.237      22:43:22 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:16:22.237      22:43:22 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:16:22.237      22:43:22 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:16:22.237     22:43:22 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:16:22.237      22:43:22 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:16:22.237      22:43:22 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/63
00:16:22.237       22:43:22 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:16:22.237     22:43:22 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:16:22.237      22:43:22 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:16:22.237     22:43:22 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:16:22.237    "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:16:22.237  }'
00:16:22.237     22:43:22 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:16:22.237     22:43:22 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:16:22.495  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:22.495  I0000 00:00:1733867003.174646  172101 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:22.495  I0000 00:00:1733867003.176302  172101 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:22.495  I0000 00:00:1733867003.177920  172283 subchannel.cc:806] subchannel 0x557d6a21ede0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x557d6a0be840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x557d6a238da0, grpc.internal.client_channel_call_destination=0x7f428b463390, grpc.internal.event_engine=0x557d6a1d03d0, grpc.internal.security_connector=0x557d6a1d0390, grpc.internal.subchannel_pool=0x557d6a1d01b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x557d69f8e570, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:23.17737473+01:00"}), backing off for 1000 ms
00:16:23.871  [2024-12-10 22:43:24.303709] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:16:23.871   22:43:24 sma.sma_crypto -- sma/crypto.sh@255 -- # device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:16:23.871   22:43:24 sma.sma_crypto -- sma/crypto.sh@256 -- # verify_crypto_volume nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:23.871   22:43:24 sma.sma_crypto -- sma/crypto.sh@132 -- # local nqn=nqn.2016-06.io.spdk:cnode0 uuid=5a790f10-948b-4632-a666-83fd8fe920c9 ns ns_bdev
00:16:23.871    22:43:24 sma.sma_crypto -- sma/crypto.sh@134 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:16:23.871    22:43:24 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:23.871    22:43:24 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:23.871    22:43:24 sma.sma_crypto -- sma/crypto.sh@134 -- # jq -r '.[0].namespaces[0]'
00:16:23.871    22:43:24 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:23.871   22:43:24 sma.sma_crypto -- sma/crypto.sh@134 -- # ns='{
00:16:23.871    "nsid": 1,
00:16:23.871    "bdev_name": "ae6eb4a8-aa07-49f5-bb3f-8d9489eca176",
00:16:23.871    "name": "ae6eb4a8-aa07-49f5-bb3f-8d9489eca176",
00:16:23.871    "nguid": "5A790F10948B4632A66683FD8FE920C9",
00:16:23.871    "uuid": "5a790f10-948b-4632-a666-83fd8fe920c9"
00:16:23.871  }'
00:16:23.871    22:43:24 sma.sma_crypto -- sma/crypto.sh@135 -- # jq -r .name
00:16:23.871   22:43:24 sma.sma_crypto -- sma/crypto.sh@135 -- # ns_bdev=ae6eb4a8-aa07-49f5-bb3f-8d9489eca176
00:16:23.871    22:43:24 sma.sma_crypto -- sma/crypto.sh@138 -- # rpc_cmd bdev_get_bdevs -b ae6eb4a8-aa07-49f5-bb3f-8d9489eca176
00:16:23.872    22:43:24 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:23.872    22:43:24 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:23.872    22:43:24 sma.sma_crypto -- sma/crypto.sh@138 -- # jq -r '.[0].product_name'
00:16:23.872    22:43:24 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:23.872   22:43:24 sma.sma_crypto -- sma/crypto.sh@138 -- # [[ crypto == crypto ]]
00:16:23.872    22:43:24 sma.sma_crypto -- sma/crypto.sh@139 -- # rpc_cmd bdev_get_bdevs
00:16:23.872    22:43:24 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:23.872    22:43:24 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:23.872    22:43:24 sma.sma_crypto -- sma/crypto.sh@139 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:16:23.872    22:43:24 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:23.872   22:43:24 sma.sma_crypto -- sma/crypto.sh@139 -- # [[ 1 -eq 1 ]]
00:16:23.872    22:43:24 sma.sma_crypto -- sma/crypto.sh@141 -- # jq -r .uuid
00:16:23.872   22:43:24 sma.sma_crypto -- sma/crypto.sh@141 -- # [[ 5a790f10-948b-4632-a666-83fd8fe920c9 == \5\a\7\9\0\f\1\0\-\9\4\8\b\-\4\6\3\2\-\a\6\6\6\-\8\3\f\d\8\f\e\9\2\0\c\9 ]]
00:16:23.872    22:43:24 sma.sma_crypto -- sma/crypto.sh@142 -- # jq -r .nguid
00:16:23.872    22:43:24 sma.sma_crypto -- sma/crypto.sh@142 -- # uuid2nguid 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:23.872    22:43:24 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=5A790F10-948B-4632-A666-83FD8FE920C9
00:16:23.872    22:43:24 sma.sma_crypto -- sma/common.sh@41 -- # echo 5A790F10948B4632A66683FD8FE920C9
00:16:23.872   22:43:24 sma.sma_crypto -- sma/crypto.sh@142 -- # [[ 5A790F10948B4632A66683FD8FE920C9 == \5\A\7\9\0\F\1\0\9\4\8\B\4\6\3\2\A\6\6\6\8\3\F\D\8\F\E\9\2\0\C\9 ]]
00:16:23.872   22:43:24 sma.sma_crypto -- sma/crypto.sh@258 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:23.872   22:43:24 sma.sma_crypto -- sma/crypto.sh@120 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:23.872    22:43:24 sma.sma_crypto -- sma/crypto.sh@120 -- # uuid2base64 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:23.872    22:43:24 sma.sma_crypto -- sma/common.sh@20 -- # python
00:16:24.130  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:24.130  I0000 00:00:1733867004.818935  172534 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:24.130  I0000 00:00:1733867004.820780  172534 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:24.130  I0000 00:00:1733867004.822075  172540 subchannel.cc:806] subchannel 0x56272c0b5de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56272bf55840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56272c0cfda0, grpc.internal.client_channel_call_destination=0x7fd62cbb3390, grpc.internal.event_engine=0x56272bf42490, grpc.internal.security_connector=0x56272c0672b0, grpc.internal.subchannel_pool=0x56272bf24690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56272bc419a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:24.821617335+01:00"}), backing off for 1000 ms
00:16:24.130  {}
00:16:24.130   22:43:24 sma.sma_crypto -- sma/crypto.sh@259 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:16:24.130   22:43:24 sma.sma_crypto -- sma/crypto.sh@94 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:24.389  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:24.389  I0000 00:00:1733867005.097547  172560 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:24.389  I0000 00:00:1733867005.099254  172560 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:24.389  I0000 00:00:1733867005.100460  172561 subchannel.cc:806] subchannel 0x561c416fbde0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x561c4159b840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x561c41715da0, grpc.internal.client_channel_call_destination=0x7fb78acc5390, grpc.internal.event_engine=0x561c4141a030, grpc.internal.security_connector=0x561c415a3770, grpc.internal.subchannel_pool=0x561c4156a690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x561c412879a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:25.100033142+01:00"}), backing off for 1000 ms
00:16:24.389  {}
00:16:24.389   22:43:25 sma.sma_crypto -- sma/crypto.sh@263 -- # NOT create_device 5a790f10-948b-4632-a666-83fd8fe920c9 8 1234567890abcdef1234567890abcdef
00:16:24.389   22:43:25 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:16:24.389   22:43:25 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg create_device 5a790f10-948b-4632-a666-83fd8fe920c9 8 1234567890abcdef1234567890abcdef
00:16:24.389   22:43:25 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=create_device
00:16:24.389   22:43:25 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:24.389    22:43:25 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t create_device
00:16:24.389   22:43:25 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:24.389   22:43:25 sma.sma_crypto -- common/autotest_common.sh@655 -- # create_device 5a790f10-948b-4632-a666-83fd8fe920c9 8 1234567890abcdef1234567890abcdef
00:16:24.389   22:43:25 sma.sma_crypto -- sma/crypto.sh@77 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:24.389    22:43:25 sma.sma_crypto -- sma/crypto.sh@77 -- # gen_volume_params 5a790f10-948b-4632-a666-83fd8fe920c9 8 1234567890abcdef1234567890abcdef
00:16:24.389    22:43:25 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=5a790f10-948b-4632-a666-83fd8fe920c9 cipher=8 key=1234567890abcdef1234567890abcdef key2= config
00:16:24.389    22:43:25 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:16:24.389     22:43:25 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:16:24.389      22:43:25 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:24.389      22:43:25 sma.sma_crypto -- sma/common.sh@20 -- # python
00:16:24.648    22:43:25 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "WnkPEJSLRjKmZoP9j+kgyQ==",
00:16:24.648  "nvmf": {
00:16:24.648    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:16:24.648    "discovery": {
00:16:24.648      "discovery_endpoints": [
00:16:24.648        {
00:16:24.648          "trtype": "tcp",
00:16:24.648          "traddr": "127.0.0.1",
00:16:24.648          "trsvcid": "8009"
00:16:24.648        }
00:16:24.648      ]
00:16:24.648    }
00:16:24.648  }'
00:16:24.648    22:43:25 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:16:24.648    22:43:25 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:16:24.648    22:43:25 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n 8 ]]
00:16:24.648    22:43:25 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:16:24.648     22:43:25 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher 8
00:16:24.648     22:43:25 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:16:24.648     22:43:25 sma.sma_crypto -- sma/common.sh@30 -- # echo 8
00:16:24.648    22:43:25 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:16:24.648     22:43:25 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:16:24.648     22:43:25 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:16:24.648      22:43:25 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:16:24.648    22:43:25 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:16:24.648     22:43:25 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:16:24.648    22:43:25 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:16:24.648    "cipher": 8,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:16:24.648  }'
00:16:24.648    22:43:25 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:16:24.648    22:43:25 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:16:24.906  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:24.906  I0000 00:00:1733867005.437865  172582 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:24.906  I0000 00:00:1733867005.439349  172582 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:24.906  I0000 00:00:1733867005.440958  172793 subchannel.cc:806] subchannel 0x55e6c911bde0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55e6c8fbb840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55e6c9135da0, grpc.internal.client_channel_call_destination=0x7f73c83e2390, grpc.internal.event_engine=0x55e6c90cd3d0, grpc.internal.security_connector=0x55e6c90cd390, grpc.internal.subchannel_pool=0x55e6c90cd1b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55e6c8e8b570, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:25.440352835+01:00"}), backing off for 1000 ms
00:16:25.841  Traceback (most recent call last):
00:16:25.841    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:16:25.841      main(sys.argv[1:])
00:16:25.841    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:16:25.841      result = client.call(request['method'], request.get('params', {}))
00:16:25.841               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:25.841    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:16:25.841      response = func(request=json_format.ParseDict(params, input()))
00:16:25.841                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:25.841    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:16:25.841      return _end_unary_response_blocking(state, call, False, None)
00:16:25.841             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:25.841    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:16:25.841      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:16:25.841      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:25.841  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:25.841  	status = StatusCode.INVALID_ARGUMENT
00:16:25.841  	details = "Invalid volume crypto configuration: bad cipher"
00:16:25.841  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-12-10T22:43:26.552088992+01:00", grpc_status:3, grpc_message:"Invalid volume crypto configuration: bad cipher"}"
00:16:25.841  >
00:16:25.841   22:43:26 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:16:25.841   22:43:26 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:25.841   22:43:26 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:25.841   22:43:26 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:25.841    22:43:26 sma.sma_crypto -- sma/crypto.sh@264 -- # rpc_cmd bdev_nvme_get_discovery_info
00:16:25.841    22:43:26 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:25.841    22:43:26 sma.sma_crypto -- sma/crypto.sh@264 -- # jq -r '. | length'
00:16:25.841    22:43:26 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:25.841    22:43:26 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:26.099   22:43:26 sma.sma_crypto -- sma/crypto.sh@264 -- # [[ 0 -eq 0 ]]
00:16:26.099    22:43:26 sma.sma_crypto -- sma/crypto.sh@265 -- # jq -r length
00:16:26.100    22:43:26 sma.sma_crypto -- sma/crypto.sh@265 -- # rpc_cmd bdev_get_bdevs
00:16:26.100    22:43:26 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:26.100    22:43:26 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:26.100    22:43:26 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:26.100   22:43:26 sma.sma_crypto -- sma/crypto.sh@265 -- # [[ 0 -eq 0 ]]
00:16:26.100    22:43:26 sma.sma_crypto -- sma/crypto.sh@266 -- # rpc_cmd nvmf_get_subsystems
00:16:26.100    22:43:26 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:26.100    22:43:26 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:26.100    22:43:26 sma.sma_crypto -- sma/crypto.sh@266 -- # jq -r '[.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode0")] | length'
00:16:26.100    22:43:26 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:26.100   22:43:26 sma.sma_crypto -- sma/crypto.sh@266 -- # [[ 0 -eq 0 ]]
00:16:26.100   22:43:26 sma.sma_crypto -- sma/crypto.sh@269 -- # killprocess 169909
00:16:26.100   22:43:26 sma.sma_crypto -- common/autotest_common.sh@954 -- # '[' -z 169909 ']'
00:16:26.100   22:43:26 sma.sma_crypto -- common/autotest_common.sh@958 -- # kill -0 169909
00:16:26.100    22:43:26 sma.sma_crypto -- common/autotest_common.sh@959 -- # uname
00:16:26.100   22:43:26 sma.sma_crypto -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:26.100    22:43:26 sma.sma_crypto -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 169909
00:16:26.100   22:43:26 sma.sma_crypto -- common/autotest_common.sh@960 -- # process_name=python3
00:16:26.100   22:43:26 sma.sma_crypto -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:16:26.100   22:43:26 sma.sma_crypto -- common/autotest_common.sh@972 -- # echo 'killing process with pid 169909'
00:16:26.100  killing process with pid 169909
00:16:26.100   22:43:26 sma.sma_crypto -- common/autotest_common.sh@973 -- # kill 169909
00:16:26.100   22:43:26 sma.sma_crypto -- common/autotest_common.sh@978 -- # wait 169909
00:16:26.100   22:43:26 sma.sma_crypto -- sma/crypto.sh@278 -- # smapid=173036
00:16:26.100   22:43:26 sma.sma_crypto -- sma/crypto.sh@280 -- # sma_waitforlisten
00:16:26.100   22:43:26 sma.sma_crypto -- sma/crypto.sh@270 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:16:26.100   22:43:26 sma.sma_crypto -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:16:26.100   22:43:26 sma.sma_crypto -- sma/common.sh@8 -- # local sma_port=8080
00:16:26.100    22:43:26 sma.sma_crypto -- sma/crypto.sh@270 -- # cat
00:16:26.100   22:43:26 sma.sma_crypto -- sma/common.sh@10 -- # (( i = 0 ))
00:16:26.100   22:43:26 sma.sma_crypto -- sma/common.sh@10 -- # (( i < 5 ))
00:16:26.100   22:43:26 sma.sma_crypto -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:16:26.100   22:43:26 sma.sma_crypto -- sma/common.sh@14 -- # sleep 1s
00:16:26.358  I0000 00:00:1733867007.022132  173036 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:27.293   22:43:27 sma.sma_crypto -- sma/common.sh@10 -- # (( i++ ))
00:16:27.293   22:43:27 sma.sma_crypto -- sma/common.sh@10 -- # (( i < 5 ))
00:16:27.293   22:43:27 sma.sma_crypto -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:16:27.293   22:43:27 sma.sma_crypto -- sma/common.sh@12 -- # return 0
00:16:27.293    22:43:27 sma.sma_crypto -- sma/crypto.sh@281 -- # create_device
00:16:27.293    22:43:27 sma.sma_crypto -- sma/crypto.sh@281 -- # jq -r .handle
00:16:27.294    22:43:27 sma.sma_crypto -- sma/crypto.sh@77 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:27.294  I0000 00:00:1733867008.059947  173274 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:27.294  I0000 00:00:1733867008.061813  173274 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:27.294  I0000 00:00:1733867008.063198  173276 subchannel.cc:806] subchannel 0x55f908694de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55f908534840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55f9086aeda0, grpc.internal.client_channel_call_destination=0x7fe0ef3b3390, grpc.internal.event_engine=0x55f908521490, grpc.internal.security_connector=0x55f9086462b0, grpc.internal.subchannel_pool=0x55f908503690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55f9082209a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:28.062640433+01:00"}), backing off for 1000 ms
00:16:27.552  [2024-12-10 22:43:28.084808] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:16:27.552   22:43:28 sma.sma_crypto -- sma/crypto.sh@281 -- # device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:16:27.552   22:43:28 sma.sma_crypto -- sma/crypto.sh@283 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9 AES_CBC 1234567890abcdef1234567890abcdef
00:16:27.552   22:43:28 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:16:27.552   22:43:28 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9 AES_CBC 1234567890abcdef1234567890abcdef
00:16:27.552   22:43:28 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:16:27.552   22:43:28 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:27.552    22:43:28 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:16:27.552   22:43:28 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:27.552   22:43:28 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 5a790f10-948b-4632-a666-83fd8fe920c9 AES_CBC 1234567890abcdef1234567890abcdef
00:16:27.552   22:43:28 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:16:27.552   22:43:28 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:16:27.552   22:43:28 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:27.552    22:43:28 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 5a790f10-948b-4632-a666-83fd8fe920c9 AES_CBC 1234567890abcdef1234567890abcdef
00:16:27.552    22:43:28 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=5a790f10-948b-4632-a666-83fd8fe920c9 cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2= config
00:16:27.552    22:43:28 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:16:27.552     22:43:28 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:16:27.552      22:43:28 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 5a790f10-948b-4632-a666-83fd8fe920c9
00:16:27.552      22:43:28 sma.sma_crypto -- sma/common.sh@20 -- # python
00:16:27.552    22:43:28 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "WnkPEJSLRjKmZoP9j+kgyQ==",
00:16:27.552  "nvmf": {
00:16:27.552    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:16:27.552    "discovery": {
00:16:27.552      "discovery_endpoints": [
00:16:27.552        {
00:16:27.552          "trtype": "tcp",
00:16:27.552          "traddr": "127.0.0.1",
00:16:27.552          "trsvcid": "8009"
00:16:27.552        }
00:16:27.552      ]
00:16:27.552    }
00:16:27.552  }'
00:16:27.552    22:43:28 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:16:27.552    22:43:28 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:16:27.552    22:43:28 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:16:27.552    22:43:28 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:16:27.552     22:43:28 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:16:27.552     22:43:28 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:16:27.552     22:43:28 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:16:27.553    22:43:28 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:16:27.553     22:43:28 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:16:27.553     22:43:28 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:16:27.553      22:43:28 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:16:27.553    22:43:28 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:16:27.553     22:43:28 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:16:27.553    22:43:28 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:16:27.553    "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:16:27.553  }'
00:16:27.553    22:43:28 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:16:27.553    22:43:28 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:16:27.812  I0000 00:00:1733867008.358950  173298 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:27.812  I0000 00:00:1733867008.360790  173298 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:27.812  I0000 00:00:1733867008.362176  173312 subchannel.cc:806] subchannel 0x5647af461de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5647af301840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5647af47bda0, grpc.internal.client_channel_call_destination=0x7fbca2148390, grpc.internal.event_engine=0x5647af180030, grpc.internal.security_connector=0x5647af4132b0, grpc.internal.subchannel_pool=0x5647af2d0690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5647aefed9a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:28.361698353+01:00"}), backing off for 999 ms
00:16:28.750  Traceback (most recent call last):
00:16:28.750    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:16:28.750      main(sys.argv[1:])
00:16:28.750    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:16:28.750      result = client.call(request['method'], request.get('params', {}))
00:16:28.750               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:28.750    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:16:28.750      response = func(request=json_format.ParseDict(params, input()))
00:16:28.750                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:28.750    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:16:28.750      return _end_unary_response_blocking(state, call, False, None)
00:16:28.750             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:28.750    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:16:28.750      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:16:28.750      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:28.750  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:28.750  	status = StatusCode.INVALID_ARGUMENT
00:16:28.750  	details = "Crypto is disabled"
00:16:28.750  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Crypto is disabled", grpc_status:3, created_time:"2024-12-10T22:43:29.475727331+01:00"}"
00:16:28.750  >
00:16:28.750   22:43:29 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:16:28.750   22:43:29 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:28.750   22:43:29 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:28.750   22:43:29 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:28.750    22:43:29 sma.sma_crypto -- sma/crypto.sh@284 -- # jq -r '. | length'
00:16:28.750    22:43:29 sma.sma_crypto -- sma/crypto.sh@284 -- # rpc_cmd bdev_nvme_get_discovery_info
00:16:28.750    22:43:29 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.750    22:43:29 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:29.016    22:43:29 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.016   22:43:29 sma.sma_crypto -- sma/crypto.sh@284 -- # [[ 0 -eq 0 ]]
00:16:29.016    22:43:29 sma.sma_crypto -- sma/crypto.sh@285 -- # rpc_cmd bdev_get_bdevs
00:16:29.016    22:43:29 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.016    22:43:29 sma.sma_crypto -- sma/crypto.sh@285 -- # jq -r length
00:16:29.016    22:43:29 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:29.016    22:43:29 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.016   22:43:29 sma.sma_crypto -- sma/crypto.sh@285 -- # [[ 0 -eq 0 ]]
00:16:29.016   22:43:29 sma.sma_crypto -- sma/crypto.sh@287 -- # cleanup
00:16:29.016   22:43:29 sma.sma_crypto -- sma/crypto.sh@22 -- # killprocess 173036
00:16:29.016   22:43:29 sma.sma_crypto -- common/autotest_common.sh@954 -- # '[' -z 173036 ']'
00:16:29.016   22:43:29 sma.sma_crypto -- common/autotest_common.sh@958 -- # kill -0 173036
00:16:29.016    22:43:29 sma.sma_crypto -- common/autotest_common.sh@959 -- # uname
00:16:29.016   22:43:29 sma.sma_crypto -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:29.016    22:43:29 sma.sma_crypto -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 173036
00:16:29.016   22:43:29 sma.sma_crypto -- common/autotest_common.sh@960 -- # process_name=python3
00:16:29.016   22:43:29 sma.sma_crypto -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:16:29.016   22:43:29 sma.sma_crypto -- common/autotest_common.sh@972 -- # echo 'killing process with pid 173036'
00:16:29.016  killing process with pid 173036
00:16:29.016   22:43:29 sma.sma_crypto -- common/autotest_common.sh@973 -- # kill 173036
00:16:29.016   22:43:29 sma.sma_crypto -- common/autotest_common.sh@978 -- # wait 173036
00:16:29.016   22:43:29 sma.sma_crypto -- sma/crypto.sh@23 -- # killprocess 169288
00:16:29.016   22:43:29 sma.sma_crypto -- common/autotest_common.sh@954 -- # '[' -z 169288 ']'
00:16:29.016   22:43:29 sma.sma_crypto -- common/autotest_common.sh@958 -- # kill -0 169288
00:16:29.016    22:43:29 sma.sma_crypto -- common/autotest_common.sh@959 -- # uname
00:16:29.016   22:43:29 sma.sma_crypto -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:29.016    22:43:29 sma.sma_crypto -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 169288
00:16:29.016   22:43:29 sma.sma_crypto -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:29.016   22:43:29 sma.sma_crypto -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:29.016   22:43:29 sma.sma_crypto -- common/autotest_common.sh@972 -- # echo 'killing process with pid 169288'
00:16:29.016  killing process with pid 169288
00:16:29.016   22:43:29 sma.sma_crypto -- common/autotest_common.sh@973 -- # kill 169288
00:16:29.016   22:43:29 sma.sma_crypto -- common/autotest_common.sh@978 -- # wait 169288
00:16:31.551   22:43:32 sma.sma_crypto -- sma/crypto.sh@24 -- # killprocess 169908
00:16:31.551   22:43:32 sma.sma_crypto -- common/autotest_common.sh@954 -- # '[' -z 169908 ']'
00:16:31.551   22:43:32 sma.sma_crypto -- common/autotest_common.sh@958 -- # kill -0 169908
00:16:31.551    22:43:32 sma.sma_crypto -- common/autotest_common.sh@959 -- # uname
00:16:31.551   22:43:32 sma.sma_crypto -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:31.551    22:43:32 sma.sma_crypto -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 169908
00:16:31.551   22:43:32 sma.sma_crypto -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:16:31.551   22:43:32 sma.sma_crypto -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:16:31.551   22:43:32 sma.sma_crypto -- common/autotest_common.sh@972 -- # echo 'killing process with pid 169908'
00:16:31.551  killing process with pid 169908
00:16:31.551   22:43:32 sma.sma_crypto -- common/autotest_common.sh@973 -- # kill 169908
00:16:31.551   22:43:32 sma.sma_crypto -- common/autotest_common.sh@978 -- # wait 169908
00:16:34.086   22:43:34 sma.sma_crypto -- sma/crypto.sh@288 -- # trap - SIGINT SIGTERM EXIT
00:16:34.086  
00:16:34.086  real	0m25.279s
00:16:34.086  user	0m51.568s
00:16:34.086  sys	0m2.983s
00:16:34.086   22:43:34 sma.sma_crypto -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:34.086   22:43:34 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:16:34.086  ************************************
00:16:34.086  END TEST sma_crypto
00:16:34.086  ************************************
00:16:34.345   22:43:34 sma -- sma/sma.sh@17 -- # run_test sma_qos /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/qos.sh
00:16:34.345   22:43:34 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:34.345   22:43:34 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:34.345   22:43:34 sma -- common/autotest_common.sh@10 -- # set +x
00:16:34.345  ************************************
00:16:34.345  START TEST sma_qos
00:16:34.345  ************************************
00:16:34.345   22:43:34 sma.sma_qos -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/qos.sh
00:16:34.345  * Looking for test storage...
00:16:34.345  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:16:34.345    22:43:34 sma.sma_qos -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:16:34.345     22:43:34 sma.sma_qos -- common/autotest_common.sh@1711 -- # lcov --version
00:16:34.345     22:43:34 sma.sma_qos -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:16:34.345    22:43:34 sma.sma_qos -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:16:34.345    22:43:34 sma.sma_qos -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:34.345    22:43:34 sma.sma_qos -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:34.345    22:43:34 sma.sma_qos -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:34.345    22:43:34 sma.sma_qos -- scripts/common.sh@336 -- # IFS=.-:
00:16:34.345    22:43:34 sma.sma_qos -- scripts/common.sh@336 -- # read -ra ver1
00:16:34.345    22:43:34 sma.sma_qos -- scripts/common.sh@337 -- # IFS=.-:
00:16:34.345    22:43:34 sma.sma_qos -- scripts/common.sh@337 -- # read -ra ver2
00:16:34.345    22:43:34 sma.sma_qos -- scripts/common.sh@338 -- # local 'op=<'
00:16:34.345    22:43:34 sma.sma_qos -- scripts/common.sh@340 -- # ver1_l=2
00:16:34.345    22:43:34 sma.sma_qos -- scripts/common.sh@341 -- # ver2_l=1
00:16:34.345    22:43:34 sma.sma_qos -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:34.345    22:43:34 sma.sma_qos -- scripts/common.sh@344 -- # case "$op" in
00:16:34.345    22:43:34 sma.sma_qos -- scripts/common.sh@345 -- # : 1
00:16:34.345    22:43:34 sma.sma_qos -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:34.345    22:43:34 sma.sma_qos -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:34.345     22:43:34 sma.sma_qos -- scripts/common.sh@365 -- # decimal 1
00:16:34.345     22:43:34 sma.sma_qos -- scripts/common.sh@353 -- # local d=1
00:16:34.345     22:43:34 sma.sma_qos -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:34.345     22:43:34 sma.sma_qos -- scripts/common.sh@355 -- # echo 1
00:16:34.345    22:43:34 sma.sma_qos -- scripts/common.sh@365 -- # ver1[v]=1
00:16:34.345     22:43:34 sma.sma_qos -- scripts/common.sh@366 -- # decimal 2
00:16:34.345     22:43:34 sma.sma_qos -- scripts/common.sh@353 -- # local d=2
00:16:34.345     22:43:34 sma.sma_qos -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:34.345     22:43:34 sma.sma_qos -- scripts/common.sh@355 -- # echo 2
00:16:34.345    22:43:34 sma.sma_qos -- scripts/common.sh@366 -- # ver2[v]=2
00:16:34.345    22:43:34 sma.sma_qos -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:34.345    22:43:34 sma.sma_qos -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:34.345    22:43:34 sma.sma_qos -- scripts/common.sh@368 -- # return 0
00:16:34.345    22:43:34 sma.sma_qos -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:34.345    22:43:34 sma.sma_qos -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:16:34.345  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:34.345  		--rc genhtml_branch_coverage=1
00:16:34.345  		--rc genhtml_function_coverage=1
00:16:34.345  		--rc genhtml_legend=1
00:16:34.345  		--rc geninfo_all_blocks=1
00:16:34.345  		--rc geninfo_unexecuted_blocks=1
00:16:34.345  		
00:16:34.345  		'
00:16:34.345    22:43:34 sma.sma_qos -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:16:34.345  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:34.345  		--rc genhtml_branch_coverage=1
00:16:34.345  		--rc genhtml_function_coverage=1
00:16:34.345  		--rc genhtml_legend=1
00:16:34.345  		--rc geninfo_all_blocks=1
00:16:34.345  		--rc geninfo_unexecuted_blocks=1
00:16:34.345  		
00:16:34.345  		'
00:16:34.345    22:43:34 sma.sma_qos -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:16:34.345  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:34.345  		--rc genhtml_branch_coverage=1
00:16:34.345  		--rc genhtml_function_coverage=1
00:16:34.345  		--rc genhtml_legend=1
00:16:34.345  		--rc geninfo_all_blocks=1
00:16:34.345  		--rc geninfo_unexecuted_blocks=1
00:16:34.346  		
00:16:34.346  		'
00:16:34.346    22:43:34 sma.sma_qos -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:16:34.346  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:34.346  		--rc genhtml_branch_coverage=1
00:16:34.346  		--rc genhtml_function_coverage=1
00:16:34.346  		--rc genhtml_legend=1
00:16:34.346  		--rc geninfo_all_blocks=1
00:16:34.346  		--rc geninfo_unexecuted_blocks=1
00:16:34.346  		
00:16:34.346  		'
00:16:34.346   22:43:34 sma.sma_qos -- sma/qos.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:16:34.346   22:43:34 sma.sma_qos -- sma/qos.sh@13 -- # smac=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:34.346   22:43:34 sma.sma_qos -- sma/qos.sh@15 -- # device_nvmf_tcp=3
00:16:34.346    22:43:35 sma.sma_qos -- sma/qos.sh@16 -- # printf %u -1
00:16:34.346   22:43:35 sma.sma_qos -- sma/qos.sh@16 -- # limit_reserved=18446744073709551615
00:16:34.346   22:43:35 sma.sma_qos -- sma/qos.sh@42 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:16:34.346   22:43:35 sma.sma_qos -- sma/qos.sh@44 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:16:34.346   22:43:35 sma.sma_qos -- sma/qos.sh@45 -- # tgtpid=174643
00:16:34.346   22:43:35 sma.sma_qos -- sma/qos.sh@55 -- # smapid=174644
00:16:34.346   22:43:35 sma.sma_qos -- sma/qos.sh@57 -- # sma_waitforlisten
00:16:34.346   22:43:35 sma.sma_qos -- sma/qos.sh@47 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:16:34.346   22:43:35 sma.sma_qos -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:16:34.346   22:43:35 sma.sma_qos -- sma/common.sh@8 -- # local sma_port=8080
00:16:34.346   22:43:35 sma.sma_qos -- sma/common.sh@10 -- # (( i = 0 ))
00:16:34.346    22:43:35 sma.sma_qos -- sma/qos.sh@47 -- # cat
00:16:34.346   22:43:35 sma.sma_qos -- sma/common.sh@10 -- # (( i < 5 ))
00:16:34.346   22:43:35 sma.sma_qos -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:16:34.346   22:43:35 sma.sma_qos -- sma/common.sh@14 -- # sleep 1s
00:16:34.346  [2024-12-10 22:43:35.097361] Starting SPDK v25.01-pre git sha1 626389917 / DPDK 24.03.0 initialization...
00:16:34.346  [2024-12-10 22:43:35.097480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174643 ]
00:16:34.604  EAL: No free 2048 kB hugepages reported on node 1
00:16:34.604  [2024-12-10 22:43:35.226010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:34.604  [2024-12-10 22:43:35.363427] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:35.540   22:43:36 sma.sma_qos -- sma/common.sh@10 -- # (( i++ ))
00:16:35.540   22:43:36 sma.sma_qos -- sma/common.sh@10 -- # (( i < 5 ))
00:16:35.540   22:43:36 sma.sma_qos -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:16:35.540   22:43:36 sma.sma_qos -- sma/common.sh@14 -- # sleep 1s
00:16:35.799  I0000 00:00:1733867016.340923  174644 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:35.799  [2024-12-10 22:43:36.352346] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:36.366   22:43:37 sma.sma_qos -- sma/common.sh@10 -- # (( i++ ))
00:16:36.366   22:43:37 sma.sma_qos -- sma/common.sh@10 -- # (( i < 5 ))
00:16:36.366   22:43:37 sma.sma_qos -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:16:36.366   22:43:37 sma.sma_qos -- sma/common.sh@12 -- # return 0
00:16:36.366   22:43:37 sma.sma_qos -- sma/qos.sh@60 -- # rpc_cmd bdev_null_create null0 100 4096
00:16:36.366   22:43:37 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:36.366   22:43:37 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:16:36.366  null0
00:16:36.366   22:43:37 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:36.366    22:43:37 sma.sma_qos -- sma/qos.sh@61 -- # rpc_cmd bdev_get_bdevs -b null0
00:16:36.366    22:43:37 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:36.366    22:43:37 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:16:36.366    22:43:37 sma.sma_qos -- sma/qos.sh@61 -- # jq -r '.[].uuid'
00:16:36.366    22:43:37 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:36.366   22:43:37 sma.sma_qos -- sma/qos.sh@61 -- # uuid=34d35728-097b-480d-996b-c0c45b87ec18
00:16:36.366    22:43:37 sma.sma_qos -- sma/qos.sh@62 -- # jq -r .handle
00:16:36.366    22:43:37 sma.sma_qos -- sma/qos.sh@62 -- # create_device 34d35728-097b-480d-996b-c0c45b87ec18
00:16:36.366    22:43:37 sma.sma_qos -- sma/qos.sh@24 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:36.366     22:43:37 sma.sma_qos -- sma/qos.sh@24 -- # uuid2base64 34d35728-097b-480d-996b-c0c45b87ec18
00:16:36.366     22:43:37 sma.sma_qos -- sma/common.sh@20 -- # python
00:16:36.625  I0000 00:00:1733867017.397565  174929 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:36.625  I0000 00:00:1733867017.399353  174929 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:36.625  I0000 00:00:1733867017.400636  175094 subchannel.cc:806] subchannel 0x558095747de0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5580955e7840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x558095761da0, grpc.internal.client_channel_call_destination=0x7fef0f469390, grpc.internal.event_engine=0x558095466030, grpc.internal.security_connector=0x5580956f92b0, grpc.internal.subchannel_pool=0x5580955b6690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5580952d39a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:37.400223312+01:00"}), backing off for 1000 ms
00:16:36.884  [2024-12-10 22:43:37.427962] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:16:36.885   22:43:37 sma.sma_qos -- sma/qos.sh@62 -- # device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
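The `uuid2base64` calls above pipe the volume UUID through an inline python snippet (`sma/common.sh@20`). A minimal sketch of what such a helper does, under the assumption that it simply base64-encodes the UUID's 16 raw bytes (the form the SMA protobuf expects for volume GUIDs):

```shell
# Hypothetical re-creation of the uuid2base64 helper: feed the UUID string
# to python, which base64-encodes the UUID's raw 16-byte representation.
uuid2base64() {
    python3 -c '
import base64, sys, uuid
print(base64.b64encode(uuid.UUID(sys.argv[1]).bytes).decode())
' "$1"
}

uuid2base64 34d35728-097b-480d-996b-c0c45b87ec18
```

The output is a 24-character base64 string; decoding it back yields the original UUID bytes.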
00:16:36.885   22:43:37 sma.sma_qos -- sma/qos.sh@65 -- # diff /dev/fd/62 /dev/fd/61
00:16:36.885    22:43:37 sma.sma_qos -- sma/qos.sh@65 -- # jq --sort-keys
00:16:36.885    22:43:37 sma.sma_qos -- sma/qos.sh@65 -- # get_qos_caps 3
00:16:36.885    22:43:37 sma.sma_qos -- sma/qos.sh@65 -- # jq --sort-keys
00:16:36.885    22:43:37 sma.sma_qos -- sma/common.sh@45 -- # local rootdir
00:16:36.885     22:43:37 sma.sma_qos -- sma/common.sh@47 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:16:36.885    22:43:37 sma.sma_qos -- sma/common.sh@47 -- # rootdir=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../..
00:16:36.885    22:43:37 sma.sma_qos -- sma/common.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py
00:16:37.144   22:43:37 sma.sma_qos -- sma/qos.sh@79 -- # NOT get_qos_caps 1234
00:16:37.144   22:43:37 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:16:37.144   22:43:37 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg get_qos_caps 1234
00:16:37.144   22:43:37 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=get_qos_caps
00:16:37.144   22:43:37 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:37.144    22:43:37 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t get_qos_caps
00:16:37.144   22:43:37 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:37.144   22:43:37 sma.sma_qos -- common/autotest_common.sh@655 -- # get_qos_caps 1234
00:16:37.144   22:43:37 sma.sma_qos -- sma/common.sh@45 -- # local rootdir
00:16:37.144    22:43:37 sma.sma_qos -- sma/common.sh@47 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:16:37.144   22:43:37 sma.sma_qos -- sma/common.sh@47 -- # rootdir=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../..
00:16:37.144   22:43:37 sma.sma_qos -- sma/common.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py
00:16:37.144  Traceback (most recent call last):
00:16:37.144    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py", line 74, in <module>
00:16:37.144      main(sys.argv[1:])
00:16:37.144    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py", line 69, in main
00:16:37.144      result = client.call(request['method'], request.get('params', {}))
00:16:37.144               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:37.144    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py", line 43, in call
00:16:37.144      response = func(request=json_format.ParseDict(params, input()))
00:16:37.144                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:37.144    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:16:37.144      return _end_unary_response_blocking(state, call, False, None)
00:16:37.144             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:37.144    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:16:37.144      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:16:37.144      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:37.144  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:37.144  	status = StatusCode.INVALID_ARGUMENT
00:16:37.144  	details = "Invalid device type"
00:16:37.144  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-12-10T22:43:37.906415929+01:00", grpc_status:3, grpc_message:"Invalid device type"}"
00:16:37.144  >
00:16:37.404   22:43:37 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:16:37.404   22:43:37 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:37.404   22:43:37 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:37.404   22:43:37 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
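The `NOT get_qos_caps 1234` sequence above (`autotest_common.sh@652`-`@679`) is the suite's negative-test wrapper: run a command that is expected to fail, capture its exit status in `es`, and succeed only if the command actually failed. A sketch of that pattern, assuming semantics like the helper traced in the log:

```shell
# Sketch of a NOT()-style negative-test wrapper: invert the exit status
# of the wrapped command so an expected failure counts as a pass.
NOT() {
    local es=0
    "$@" || es=$?
    # es stays 0 only if the command unexpectedly succeeded
    (( es != 0 ))
}

NOT false && echo "expected failure observed"
```

With this wrapper, `NOT some_cmd` exits 0 when `some_cmd` fails, which is exactly what the `es=1` / `(( !es == 0 ))` lines in the trace are checking.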
00:16:37.404   22:43:37 sma.sma_qos -- sma/qos.sh@82 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:37.404    22:43:37 sma.sma_qos -- sma/qos.sh@82 -- # uuid2base64 34d35728-097b-480d-996b-c0c45b87ec18
00:16:37.404    22:43:37 sma.sma_qos -- sma/common.sh@20 -- # python
00:16:37.662  {}
00:16:37.662    22:43:38 sma.sma_qos -- sma/qos.sh@94 -- # jq --sort-keys
00:16:37.662   22:43:38 sma.sma_qos -- sma/qos.sh@94 -- # diff /dev/fd/62 /dev/fd/61
00:16:37.662    22:43:38 sma.sma_qos -- sma/qos.sh@94 -- # rpc_cmd bdev_get_bdevs -b null0
00:16:37.662    22:43:38 sma.sma_qos -- sma/qos.sh@94 -- # jq --sort-keys '.[].assigned_rate_limits'
00:16:37.662    22:43:38 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:37.662    22:43:38 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:16:37.662    22:43:38 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
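The recurring `diff /dev/fd/62 /dev/fd/61` lines come from bash process substitution: both the expected and actual QoS JSON are canonicalized with `jq --sort-keys` and compared structurally. A self-contained sketch of the pattern (using python's json module as the canonicalizer here, purely so the example has no jq dependency):

```shell
# Compare two JSON documents structurally: canonicalize key order on both
# sides, then diff them via process substitution (bash exposes each side
# as a /dev/fd/NN path, as seen in the trace).
canon() { python3 -c 'import json, sys; print(json.dumps(json.load(sys.stdin), sort_keys=True))'; }

expected='{"b": 2, "a": 1}'
actual='{"a": 1, "b": 2}'
if diff <(echo "$expected" | canon) <(echo "$actual" | canon); then
    echo "JSON documents match"
fi
```

An empty diff (exit 0) means the device's reported rate limits match the expectation, which is why a passing check leaves no diff output in the log.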
00:16:37.662   22:43:38 sma.sma_qos -- sma/qos.sh@106 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:37.662    22:43:38 sma.sma_qos -- sma/qos.sh@106 -- # uuid2base64 34d35728-097b-480d-996b-c0c45b87ec18
00:16:37.662    22:43:38 sma.sma_qos -- sma/common.sh@20 -- # python
00:16:37.921  {}
00:16:37.921    22:43:38 sma.sma_qos -- sma/qos.sh@119 -- # jq --sort-keys
00:16:37.921   22:43:38 sma.sma_qos -- sma/qos.sh@119 -- # diff /dev/fd/62 /dev/fd/61
00:16:37.921    22:43:38 sma.sma_qos -- sma/qos.sh@119 -- # jq --sort-keys '.[].assigned_rate_limits'
00:16:37.921    22:43:38 sma.sma_qos -- sma/qos.sh@119 -- # rpc_cmd bdev_get_bdevs -b null0
00:16:37.921    22:43:38 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:37.921    22:43:38 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:16:37.921    22:43:38 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:37.921   22:43:38 sma.sma_qos -- sma/qos.sh@131 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:37.921    22:43:38 sma.sma_qos -- sma/qos.sh@131 -- # uuid2base64 34d35728-097b-480d-996b-c0c45b87ec18
00:16:37.921    22:43:38 sma.sma_qos -- sma/common.sh@20 -- # python
00:16:38.180  {}
00:16:38.439    22:43:38 sma.sma_qos -- sma/qos.sh@145 -- # rpc_cmd bdev_get_bdevs -b null0
00:16:38.439    22:43:38 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:38.439    22:43:38 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:16:38.439    22:43:38 sma.sma_qos -- sma/qos.sh@145 -- # jq --sort-keys '.[].assigned_rate_limits'
00:16:38.439   22:43:38 sma.sma_qos -- sma/qos.sh@145 -- # diff /dev/fd/62 /dev/fd/61
00:16:38.439    22:43:38 sma.sma_qos -- sma/qos.sh@145 -- # jq --sort-keys
00:16:38.439    22:43:38 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:38.439   22:43:39 sma.sma_qos -- sma/qos.sh@157 -- # unsupported_max_limits=(rd_iops wr_iops)
00:16:38.439   22:43:39 sma.sma_qos -- sma/qos.sh@159 -- # for limit in "${unsupported_max_limits[@]}"
00:16:38.439   22:43:39 sma.sma_qos -- sma/qos.sh@160 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:38.439    22:43:39 sma.sma_qos -- sma/qos.sh@160 -- # uuid2base64 34d35728-097b-480d-996b-c0c45b87ec18
00:16:38.439    22:43:39 sma.sma_qos -- sma/common.sh@20 -- # python
00:16:38.439   22:43:39 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:16:38.439   22:43:39 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:38.439   22:43:39 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:38.439   22:43:39 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:38.439    22:43:39 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:38.439   22:43:39 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:38.439    22:43:39 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:38.439   22:43:39 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:38.439   22:43:39 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:38.439   22:43:39 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:16:38.439   22:43:39 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:38.699  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:38.699  	status = StatusCode.INVALID_ARGUMENT
00:16:38.699  	details = "Unsupported QoS limit: maximum.rd_iops"
00:16:38.699  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-12-10T22:43:39.277473773+01:00", grpc_status:3, grpc_message:"Unsupported QoS limit: maximum.rd_iops"}"
00:16:38.699  >
00:16:38.699   22:43:39 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:16:38.699   22:43:39 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:38.699   22:43:39 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:38.699   22:43:39 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:38.699   22:43:39 sma.sma_qos -- sma/qos.sh@159 -- # for limit in "${unsupported_max_limits[@]}"
00:16:38.699   22:43:39 sma.sma_qos -- sma/qos.sh@160 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:38.699    22:43:39 sma.sma_qos -- sma/qos.sh@160 -- # uuid2base64 34d35728-097b-480d-996b-c0c45b87ec18
00:16:38.699    22:43:39 sma.sma_qos -- sma/common.sh@20 -- # python
00:16:38.699   22:43:39 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:16:38.699   22:43:39 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:38.699   22:43:39 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:38.699   22:43:39 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:38.699    22:43:39 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:38.699   22:43:39 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:38.699    22:43:39 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:38.699   22:43:39 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:38.699   22:43:39 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:38.699   22:43:39 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:16:38.699   22:43:39 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:38.958  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:38.958  	status = StatusCode.INVALID_ARGUMENT
00:16:38.958  	details = "Unsupported QoS limit: maximum.wr_iops"
00:16:38.958  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Unsupported QoS limit: maximum.wr_iops", grpc_status:3, created_time:"2024-12-10T22:43:39.562443116+01:00"}"
00:16:38.958  >
00:16:38.958   22:43:39 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:16:38.958   22:43:39 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:38.958   22:43:39 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:38.958   22:43:39 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:38.958   22:43:39 sma.sma_qos -- sma/qos.sh@178 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:38.958    22:43:39 sma.sma_qos -- sma/qos.sh@178 -- # uuid2base64 34d35728-097b-480d-996b-c0c45b87ec18
00:16:38.958    22:43:39 sma.sma_qos -- sma/common.sh@20 -- # python
00:16:38.958   22:43:39 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:16:38.958   22:43:39 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:38.958   22:43:39 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:38.958   22:43:39 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:38.958    22:43:39 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:38.958   22:43:39 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:38.958    22:43:39 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:38.958   22:43:39 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:38.958   22:43:39 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:38.958   22:43:39 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:16:38.958   22:43:39 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:39.217  [2024-12-10 22:43:39.834743] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:cnode0-invalid' does not exist
00:16:39.217  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:39.217  	status = StatusCode.NOT_FOUND
00:16:39.217  	details = "No device associated with device_handle could be found"
00:16:39.217  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"No device associated with device_handle could be found", grpc_status:5, created_time:"2024-12-10T22:43:39.839061159+01:00"}"
00:16:39.217  >
00:16:39.217   22:43:39 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:16:39.217   22:43:39 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:39.217   22:43:39 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:39.217   22:43:39 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:39.217   22:43:39 sma.sma_qos -- sma/qos.sh@191 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:39.217     22:43:39 sma.sma_qos -- sma/qos.sh@191 -- # uuidgen
00:16:39.217    22:43:39 sma.sma_qos -- sma/qos.sh@191 -- # uuid2base64 b9608808-5883-4bbf-96f5-ba47a8473645
00:16:39.217    22:43:39 sma.sma_qos -- sma/common.sh@20 -- # python
00:16:39.217   22:43:39 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:16:39.217   22:43:39 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:39.217   22:43:39 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:39.217   22:43:39 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:39.217    22:43:39 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:39.217   22:43:39 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:39.217    22:43:39 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:39.217   22:43:39 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:39.217   22:43:39 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:39.217   22:43:39 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:16:39.217   22:43:39 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:39.477  [2024-12-10 22:43:40.127663] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: b9608808-5883-4bbf-96f5-ba47a8473645
00:16:39.477  Traceback (most recent call last):
00:16:39.477    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:16:39.477      main(sys.argv[1:])
00:16:39.477    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:16:39.477      result = client.call(request['method'], request.get('params', {}))
00:16:39.477               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:39.477    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:16:39.477      response = func(request=json_format.ParseDict(params, input()))
00:16:39.477                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:39.477    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:16:39.477      return _end_unary_response_blocking(state, call, False, None)
00:16:39.477             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:39.477    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:16:39.477      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:16:39.477      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:39.477  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:39.477  	status = StatusCode.NOT_FOUND
00:16:39.477  	details = "No volume associated with volume_id could be found"
00:16:39.477  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-12-10T22:43:40.132001652+01:00", grpc_status:5, grpc_message:"No volume associated with volume_id could be found"}"
00:16:39.477  >
00:16:39.477   22:43:40 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:16:39.477   22:43:40 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:39.477   22:43:40 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:39.477   22:43:40 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:39.477   22:43:40 sma.sma_qos -- sma/qos.sh@205 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:39.477   22:43:40 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:16:39.477   22:43:40 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:39.477   22:43:40 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:39.477   22:43:40 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:39.477    22:43:40 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:39.477   22:43:40 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:39.477    22:43:40 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:39.477   22:43:40 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:39.477   22:43:40 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:39.477   22:43:40 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:16:39.477   22:43:40 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:39.736  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:39.736  I0000 00:00:1733867020.367970  175777 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:39.736  I0000 00:00:1733867020.369411  175777 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:39.736  I0000 00:00:1733867020.373473  175782 subchannel.cc:806] subchannel 0x55887854ade0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5588783ea840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x558878564da0, grpc.internal.client_channel_call_destination=0x7fc3bdd8c390, grpc.internal.event_engine=0x5588783d7490, grpc.internal.security_connector=0x5588784fc2b0, grpc.internal.subchannel_pool=0x5588783b9690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5588780d69a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:40.372841683+01:00"}), backing off for 999 ms
00:16:39.736  Traceback (most recent call last):
00:16:39.736    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:16:39.736      main(sys.argv[1:])
00:16:39.736    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:16:39.736      result = client.call(request['method'], request.get('params', {}))
00:16:39.736               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:39.736    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:16:39.736      response = func(request=json_format.ParseDict(params, input()))
00:16:39.736                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:39.736    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:16:39.736      return _end_unary_response_blocking(state, call, False, None)
00:16:39.736             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:39.736    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:16:39.736      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:16:39.736      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:39.736  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:39.736  	status = StatusCode.INVALID_ARGUMENT
00:16:39.736  	details = "Invalid volume ID"
00:16:39.736  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid volume ID", grpc_status:3, created_time:"2024-12-10T22:43:40.374609379+01:00"}"
00:16:39.736  >
00:16:39.736   22:43:40 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:16:39.736   22:43:40 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:39.736   22:43:40 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:39.736   22:43:40 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:39.736   22:43:40 sma.sma_qos -- sma/qos.sh@217 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:39.736    22:43:40 sma.sma_qos -- sma/qos.sh@217 -- # uuid2base64 34d35728-097b-480d-996b-c0c45b87ec18
00:16:39.736    22:43:40 sma.sma_qos -- sma/common.sh@20 -- # python
00:16:39.736   22:43:40 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:16:39.736   22:43:40 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:39.736   22:43:40 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:39.736   22:43:40 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:39.736    22:43:40 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:39.736   22:43:40 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:39.736    22:43:40 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:39.736   22:43:40 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:39.736   22:43:40 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:39.736   22:43:40 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:16:39.736   22:43:40 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:39.995  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:39.995  I0000 00:00:1733867020.654724  175806 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:39.995  I0000 00:00:1733867020.656531  175806 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:39.995  I0000 00:00:1733867020.658130  175813 subchannel.cc:806] subchannel 0x55642dc4ade0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55642daea840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55642dc64da0, grpc.internal.client_channel_call_destination=0x7fe949553390, grpc.internal.event_engine=0x55642d969030, grpc.internal.security_connector=0x55642dbfc2b0, grpc.internal.subchannel_pool=0x55642dab9690, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55642d7d69a0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-10T22:43:40.657363231+01:00"}), backing off for 1000 ms
00:16:39.995  Traceback (most recent call last):
00:16:39.995    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:16:39.995      main(sys.argv[1:])
00:16:39.995    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:16:39.995      result = client.call(request['method'], request.get('params', {}))
00:16:39.995               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:39.995    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:16:39.995      response = func(request=json_format.ParseDict(params, input()))
00:16:39.995                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:39.995    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:16:39.995      return _end_unary_response_blocking(state, call, False, None)
00:16:39.995             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:39.995    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:16:39.995      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:16:39.995      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:39.995  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:39.995  	status = StatusCode.NOT_FOUND
00:16:39.995  	details = "Invalid device handle"
00:16:39.995  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid device handle", grpc_status:5, created_time:"2024-12-10T22:43:40.659265382+01:00"}"
00:16:39.995  >
00:16:39.995   22:43:40 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:16:39.995   22:43:40 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:39.995   22:43:40 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:39.995   22:43:40 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:39.995   22:43:40 sma.sma_qos -- sma/qos.sh@230 -- # diff /dev/fd/62 /dev/fd/61
00:16:39.995    22:43:40 sma.sma_qos -- sma/qos.sh@230 -- # jq --sort-keys '.[].assigned_rate_limits'
00:16:39.995    22:43:40 sma.sma_qos -- sma/qos.sh@230 -- # jq --sort-keys
00:16:39.995    22:43:40 sma.sma_qos -- sma/qos.sh@230 -- # rpc_cmd bdev_get_bdevs -b null0
00:16:39.995    22:43:40 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:39.995    22:43:40 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:16:39.995    22:43:40 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:39.995   22:43:40 sma.sma_qos -- sma/qos.sh@241 -- # trap - SIGINT SIGTERM EXIT
00:16:39.995   22:43:40 sma.sma_qos -- sma/qos.sh@242 -- # cleanup
00:16:39.995   22:43:40 sma.sma_qos -- sma/qos.sh@19 -- # killprocess 174643
00:16:39.995   22:43:40 sma.sma_qos -- common/autotest_common.sh@954 -- # '[' -z 174643 ']'
00:16:39.995   22:43:40 sma.sma_qos -- common/autotest_common.sh@958 -- # kill -0 174643
00:16:39.995    22:43:40 sma.sma_qos -- common/autotest_common.sh@959 -- # uname
00:16:39.995   22:43:40 sma.sma_qos -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:39.995    22:43:40 sma.sma_qos -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 174643
00:16:39.995   22:43:40 sma.sma_qos -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:39.995   22:43:40 sma.sma_qos -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:39.995   22:43:40 sma.sma_qos -- common/autotest_common.sh@972 -- # echo 'killing process with pid 174643'
00:16:39.995  killing process with pid 174643
00:16:39.995   22:43:40 sma.sma_qos -- common/autotest_common.sh@973 -- # kill 174643
00:16:39.995   22:43:40 sma.sma_qos -- common/autotest_common.sh@978 -- # wait 174643
00:16:43.281   22:43:43 sma.sma_qos -- sma/qos.sh@20 -- # killprocess 174644
00:16:43.281   22:43:43 sma.sma_qos -- common/autotest_common.sh@954 -- # '[' -z 174644 ']'
00:16:43.281   22:43:43 sma.sma_qos -- common/autotest_common.sh@958 -- # kill -0 174644
00:16:43.281    22:43:43 sma.sma_qos -- common/autotest_common.sh@959 -- # uname
00:16:43.281   22:43:43 sma.sma_qos -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:43.281    22:43:43 sma.sma_qos -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 174644
00:16:43.281   22:43:43 sma.sma_qos -- common/autotest_common.sh@960 -- # process_name=python3
00:16:43.281   22:43:43 sma.sma_qos -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:16:43.281   22:43:43 sma.sma_qos -- common/autotest_common.sh@972 -- # echo 'killing process with pid 174644'
00:16:43.281  killing process with pid 174644
00:16:43.281   22:43:43 sma.sma_qos -- common/autotest_common.sh@973 -- # kill 174644
00:16:43.281   22:43:43 sma.sma_qos -- common/autotest_common.sh@978 -- # wait 174644
00:16:43.281  
00:16:43.281  real	0m8.588s
00:16:43.281  user	0m11.467s
00:16:43.281  sys	0m1.224s
00:16:43.281   22:43:43 sma.sma_qos -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:43.281   22:43:43 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:16:43.281  ************************************
00:16:43.281  END TEST sma_qos
00:16:43.281  ************************************
00:16:43.281  
00:16:43.281  real	3m36.586s
00:16:43.281  user	6m18.340s
00:16:43.281  sys	0m21.593s
00:16:43.281   22:43:43 sma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:43.281   22:43:43 sma -- common/autotest_common.sh@10 -- # set +x
00:16:43.281  ************************************
00:16:43.281  END TEST sma
00:16:43.281  ************************************
00:16:43.281   22:43:43  -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:16:43.281   22:43:43  -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:16:43.281   22:43:43  -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:16:43.281   22:43:43  -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:16:43.281   22:43:43  -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:43.281   22:43:43  -- common/autotest_common.sh@10 -- # set +x
00:16:43.281   22:43:43  -- spdk/autotest.sh@388 -- # autotest_cleanup
00:16:43.281   22:43:43  -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:16:43.281   22:43:43  -- common/autotest_common.sh@1397 -- # xtrace_disable
00:16:43.281   22:43:43  -- common/autotest_common.sh@10 -- # set +x
00:16:45.184  INFO: APP EXITING
00:16:45.184  INFO: killing all VMs
00:16:45.184  INFO: killing vhost app
00:16:45.184  INFO: EXIT DONE
00:16:46.119  0000:00:04.7 (8086 6f27): Already using the ioatdma driver
00:16:46.119  0000:00:04.6 (8086 6f26): Already using the ioatdma driver
00:16:46.119  0000:00:04.5 (8086 6f25): Already using the ioatdma driver
00:16:46.119  0000:00:04.4 (8086 6f24): Already using the ioatdma driver
00:16:46.119  0000:00:04.3 (8086 6f23): Already using the ioatdma driver
00:16:46.119  0000:00:04.2 (8086 6f22): Already using the ioatdma driver
00:16:46.119  0000:00:04.1 (8086 6f21): Already using the ioatdma driver
00:16:46.119  0000:00:04.0 (8086 6f20): Already using the ioatdma driver
00:16:46.119  0000:80:04.7 (8086 6f27): Already using the ioatdma driver
00:16:46.119  0000:80:04.6 (8086 6f26): Already using the ioatdma driver
00:16:46.119  0000:80:04.5 (8086 6f25): Already using the ioatdma driver
00:16:46.119  0000:80:04.4 (8086 6f24): Already using the ioatdma driver
00:16:46.119  0000:80:04.3 (8086 6f23): Already using the ioatdma driver
00:16:46.119  0000:80:04.2 (8086 6f22): Already using the ioatdma driver
00:16:46.119  0000:80:04.1 (8086 6f21): Already using the ioatdma driver
00:16:46.119  0000:80:04.0 (8086 6f20): Already using the ioatdma driver
00:16:46.119  0000:0d:00.0 (8086 0a54): Already using the nvme driver
00:16:47.054  Cleaning
00:16:47.054  Removing:    /dev/shm/spdk_tgt_trace.pid24830
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid112651
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid113275
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid120418
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid132630
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid138947
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid144959
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid148931
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid148932
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid148933
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid165438
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid169288
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid169908
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid174643
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid20261
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid22205
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid24830
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid26035
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid27386
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid28128
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid29616
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid29841
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid30625
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid31714
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid32425
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid33319
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid34210
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid34439
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid34864
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid35137
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid36211
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid39741
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid40587
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid41322
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid41649
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid43545
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid43756
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid45842
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid46058
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid46702
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid46924
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid47568
00:16:47.054  Removing:    /var/run/dpdk/spdk_pid47784
00:16:47.313  Removing:    /var/run/dpdk/spdk_pid49331
00:16:47.313  Removing:    /var/run/dpdk/spdk_pid49751
00:16:47.313  Removing:    /var/run/dpdk/spdk_pid50034
00:16:47.313  Removing:    /var/run/dpdk/spdk_pid51913
00:16:47.313  Removing:    /var/run/dpdk/spdk_pid65532
00:16:47.313  Removing:    /var/run/dpdk/spdk_pid77147
00:16:47.313  Removing:    /var/run/dpdk/spdk_pid96308
00:16:47.313  Clean
00:16:47.313   22:43:47  -- common/autotest_common.sh@1453 -- # return 0
00:16:47.313   22:43:47  -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:16:47.313   22:43:47  -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:47.313   22:43:47  -- common/autotest_common.sh@10 -- # set +x
00:16:47.313   22:43:47  -- spdk/autotest.sh@391 -- # timing_exit autotest
00:16:47.313   22:43:47  -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:47.313   22:43:47  -- common/autotest_common.sh@10 -- # set +x
00:16:47.313   22:43:47  -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/timing.txt
00:16:47.313   22:43:47  -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/udev.log ]]
00:16:47.313   22:43:47  -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/udev.log
00:16:47.313   22:43:47  -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:16:47.313    22:43:47  -- spdk/autotest.sh@398 -- # hostname
00:16:47.313   22:43:47  -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk -t spdk-wfp-17 -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_test.info
00:16:47.313  geninfo: WARNING: invalid characters removed from testname!
00:17:05.404   22:44:05  -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:17:07.939   22:44:08  -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:17:09.842   22:44:10  -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:17:11.747   22:44:12  -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:17:13.652   22:44:14  -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:17:16.186   22:44:16  -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:17:18.091   22:44:18  -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:17:18.091   22:44:18  -- spdk/autorun.sh@1 -- $ timing_finish
00:17:18.091   22:44:18  -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/timing.txt ]]
00:17:18.091   22:44:18  -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:17:18.091   22:44:18  -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:17:18.091   22:44:18  -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/timing.txt
00:17:18.091  + [[ -n 4134983 ]]
00:17:18.091  + sudo kill 4134983
00:17:18.102  [Pipeline] }
00:17:18.117  [Pipeline] // stage
00:17:18.122  [Pipeline] }
00:17:18.137  [Pipeline] // timeout
00:17:18.143  [Pipeline] }
00:17:18.157  [Pipeline] // catchError
00:17:18.162  [Pipeline] }
00:17:18.177  [Pipeline] // wrap
00:17:18.183  [Pipeline] }
00:17:18.195  [Pipeline] // catchError
00:17:18.205  [Pipeline] stage
00:17:18.207  [Pipeline] { (Epilogue)
00:17:18.221  [Pipeline] catchError
00:17:18.223  [Pipeline] {
00:17:18.236  [Pipeline] echo
00:17:18.238  Cleanup processes
00:17:18.243  [Pipeline] sh
00:17:18.531  + sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:17:18.531  182190 sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:17:18.547  [Pipeline] sh
00:17:18.835  ++ sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:17:18.835  ++ grep -v 'sudo pgrep'
00:17:18.835  ++ awk '{print $1}'
00:17:18.835  + sudo kill -9
00:17:18.835  + true
00:17:18.848  [Pipeline] sh
00:17:19.153  + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:17:27.302  [Pipeline] sh
00:17:27.590  + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:17:27.590  Artifacts sizes are good
00:17:27.606  [Pipeline] archiveArtifacts
00:17:27.615  Archiving artifacts
00:17:27.753  [Pipeline] sh
00:17:28.042  + sudo chown -R sys_sgci: /var/jenkins/workspace/vfio-user-phy-autotest
00:17:28.058  [Pipeline] cleanWs
00:17:28.070  [WS-CLEANUP] Deleting project workspace...
00:17:28.070  [WS-CLEANUP] Deferred wipeout is used...
00:17:28.078  [WS-CLEANUP] done
00:17:28.079  [Pipeline] }
00:17:28.098  [Pipeline] // catchError
00:17:28.114  [Pipeline] sh
00:17:28.401  + logger -p user.info -t JENKINS-CI
00:17:28.411  [Pipeline] }
00:17:28.425  [Pipeline] // stage
00:17:28.430  [Pipeline] }
00:17:28.445  [Pipeline] // node
00:17:28.451  [Pipeline] End of Pipeline
00:17:28.511  Finished: SUCCESS